Several of my friends and acquaintances on the American side of the pond have remarked recently that they are fervently wishing for a break from the endless barrage of election coverage and the increasingly strident rhetoric emanating from each camp. In case closing eyes and clamping hands over ears isn’t enough, the following may serve as some solace: it’s not meant to be seen or heard in any reasonable sense anyway. As a letter-writer in the 25th August edition of The Economist remarks,
“This election will be very close and more voters than usual seem to have made up their minds already. The surest path to victory for either candidate is to motivate their core supporters.”
Put another way, it’s no longer about turning independent voters into Democratic-leaning or Republican-leaning ones (let alone flipping votes entirely!), but simply about keeping the emotions of the appropriately-inclined voters sufficiently inflamed that they actually go to the trouble of expressing their inclination at the polling booth in November. So, if the torrent of propaganda seems purely designed to irritate instead of persuade… that’s because it is! Knowing that, good luck to you all in tuning out the noise.
So much for mathematics: I seem to be on a bit of a politico-philosophical bent this week. Today’s article “Grand Racist Party?” in The Economist’s “Democracy in America” series is rather interesting to me, for two main reasons plus one footnote.
First, citing Alex Tabarrok, John Sides and Reihan Salam, the article takes issue with the standard (at least on the USA’s political left) and simplistic assumption that “racist Americans are almost entirely in one political coalition [i.e. the Republican Party] and not the other”. The correlation is (a) weaker and (b) more complex than it is assumed to be, and it’s point (b) that I particularly like. I always like it when someone realises that assuming uniqueness is screwing up their understanding of a problem: it’s not racism, it’s racisms. Quoting Reihan Salam:
“[The] changing demographic composition of the U.S. population, and the changing cultural landscape, has given rise to other intercultural frictions, e.g., between non-Latino black Americans and Latinos, between non-Asians and Asians, etc. As we take into account these other forms of prejudice, one assumes that a very complex picture would emerge.”
In other words, when you try to map a whole multi-nodal network of interactions down onto a simple left-right political axis, you grossly over-simplify the problem and get it wrong. Straightforward white–v.–black prejudices might stand a chance of correlating well to a two-party split; the full complexity of American culture stands no such chance in my view.
Continue reading “Racisms, Liberties and Bulk Phenomena”
Whether or not sanity is statistical (as the interrogator O’Brien in George Orwell’s Nineteen Eighty-Four would have it), it is deeply political, even geo-political. What exactly constitutes sanity and whether or not certain critical individuals possess it are questions of great importance. Two examples of this loom large in my mind at the moment.
The first is the trial of Anders Behring Breivik, in which a verdict is expected within a week. The question is not one of the accused’s guilt, but of his sanity. Breivik seems happy to claim factual responsibility for the killings in Oslo and at Utøya Island; he denies moral culpability, but the court will award him that and sentence him accordingly if and only if it is satisfied that he is sane. The magnitude of Breivik’s crimes calls forth competing popular demands that he be regarded as insane (for what sane human could do something so vicious?) and sane (if nothing else, to deny him the “soft option” of psychiatric as opposed to penal incarceration). Breivik has articulated a rather lengthy and detailed philosophical and political position in justification of his acts. Is sanity the same as rationality? If so, then the question for those in the “he is insane” camp is this: is it his premises or his reasoning that are at fault? For me personally, the worrisome aspect is the “premises” side. But on to the second case…
Continue reading “The Politics of Sanity”
“In re mathematica ars proponendi pluris facienda est quam solvendi.”
(In mathematics the art of asking [questions] is more valuable than solving [them].)
— Georg Cantor (1845–1918), Doctoral Thesis, 1867
“The uncreative mind can spot wrong answers, but it takes a very creative mind to spot wrong questions.”
— Antony Jay (1930–)
It’s a common but still dispiriting experience for me to see an excellent tool being applied to the wrong problem. Conclusions are only as sound as (a) the logical reasoning used and, crucially, (b) the validity of the premises, which includes the applicability of the method itself. Both “the right answer to the wrong question” and “the wrong answer to the right question” are wrong, but the former is more devastating because it carries an aura of (false) respectability that can lead one into making bad decisions with great confidence.
This part of MA398 Matrix Analysis and Algorithms concerns direct methods for the solution of systems of simultaneous linear equations (SLE); iterative methods will be covered in the next part. To recall, the SLE problem is: given A ∈ ℂn×n and b ∈ ℂn, find x ∈ ℂn such that Ax = b. It will be assumed throughout that A is invertible.
Since I’m using quite a bit of Python for other projects at the moment, my natural tendency is to write up the various algorithms in Python-like pseudo-code. So, in the code sections, vectors will be single-subscript arrays x, where the ith entry is denoted x[i]; however, I’ll stick to the mathematical convention of having the first index be 1 instead of 0. Matrices will have entries A[i, j]. An expression like x[m:n] denotes that portion of an array x that has index greater than or equal to m and strictly less than n. The n×n identity matrix will be denoted eye(n), and the m×n zero matrix by zeros(m, n).
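As a small taste of the kind of direct method this part covers, here is a sketch of back substitution for an upper-triangular system in actual (runnable) Python rather than pseudo-code. Note that, unlike the 1-based convention above, real Python indexes from 0; the function name and example data are my own illustration, not taken from the notes.

```python
def back_substitution(U, b):
    """Solve Ux = b for x, where U is upper triangular with nonzero diagonal.

    U is a list of n rows, each a list of n numbers; b is a list of n numbers.
    Indices here are 0-based, as is native to Python.
    """
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # Subtract the contributions of the already-computed entries x[i+1:n].
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

# Example: solve [[2, 1], [0, 3]] x = [5, 6], giving x = [1.5, 2.0].
print(back_substitution([[2.0, 1.0], [0.0, 3.0]], [5.0, 6.0]))
```

The backward loop is the essential structural feature: the last equation involves only x[n−1], and each earlier unknown is recovered using those already found.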
Continue reading “MA398.5: Simultaneous Linear Equations – Direct Methods”
The topic of this part of MA398 Matrix Analysis and Algorithms is the computational cost of solving numerical problems, as measured by the number of algebraic operations — and, in particular, how that depends upon the size of the problem. For simplicity, we will neglect issues like memory usage, parallel communications costs and so on, and define the cost of an algorithm to be the total number of additions, subtractions, multiplications, divisions and square roots performed during one run of the algorithm.
To describe asymptotic costs as the problem size becomes large, we will use the following notation:
g = O(f) if lim sup_{x→∞} g(x)⁄f(x) < ∞,
g = Ω(f) if lim inf_{x→∞} g(x)⁄f(x) > 0,
g = Θ(f) if g = O(f) and g = Ω(f),
g ∼ f if lim_{x→∞} g(x)⁄f(x) = 1.
These notes will not attempt to formally define what constitutes “an algorithm”. This can be a tricky point, and is especially important if one wants to rigorously establish non-trivial lower bounds on computational costs: if simply guessing the right answer and returning it in one operation is a valid algorithm, then the lower bounds are trivially 1 in all cases.
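To make the operation-counting definition of cost concrete, here is a small illustration of my own (not part of the notes): the naive n×n matrix–vector product uses n² multiplications and n(n−1) additions per run, so its cost is Θ(n²) in the sense of the notation above.

```python
def matvec_cost(n):
    """Total +, -, *, /, sqrt operations in a naive n-by-n matrix-vector product."""
    mults = n * n        # one multiplication per matrix entry
    adds = n * (n - 1)   # each of the n rows needs n - 1 additions
    return mults + adds

# The cost 2n^2 - n is Theta(n^2): doubling n roughly quadruples the work.
for n in [10, 100, 1000]:
    print(n, matvec_cost(n))
```

The same bookkeeping style, applied loop by loop, gives the costs quoted for the direct methods in the previous part.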
Continue reading “MA398.4: Complexity of Algorithms”
The topic of this third part of MA398 Matrix Analysis and Algorithms is the study of the (usually inevitable) differences between computed and mathematically exact results, particularly in our three prototypical problems of simultaneous linear equations (SLE), least squares (LSQ), and eigenvalue/eigenvector problems (EVP). The first key point is that we split the error analysis into two parts: conditioning, which is a property of the problem alone, and stability, which is a property of the algorithm used to (approximately) solve the problem.
A useful point of view is that of backward error analysis: the computed solution to the problem is viewed as the exact solution to a perturbed problem. If that perturbation in “problem space” is small then the algorithm is called backward stable, and unstable otherwise. Once the original and perturbed problems have been identified, we wish to understand the difference between their respective exact solutions — this is the issue of conditioning. The three prototypical problems can all be viewed as solving an equation of the form G(y, w) = 0, where w denotes the data that define the problem and y the solution.
In the backward error analysis point of view, the computed solution ŷ solves G(ŷ, ŵ) = 0. Conditioning concerns the estimation of Δy ≔ y−ŷ in terms of Δw ≔ w−ŵ.
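As a concrete (and entirely my own, hence hedged) illustration of the backward-error viewpoint for the SLE problem: one common measure of how far a computed solution ŷ of Ay = b is from being the exact solution of a nearby problem is the normwise relative backward error ‖b − Aŷ‖ ⁄ (‖A‖ ‖ŷ‖), built from the residual.

```python
import numpy as np

def backward_error(A, b, y_hat):
    """Normwise relative backward error of y_hat as a solution of A y = b.

    A small value means y_hat exactly solves a slightly perturbed problem,
    i.e. the perturbation in "problem space" is small.
    """
    r = b - A @ y_hat  # residual of the computed solution
    return np.linalg.norm(r) / (np.linalg.norm(A) * np.linalg.norm(y_hat))

# Hypothetical example data, chosen only for illustration.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
y_hat = np.linalg.solve(A, b)
print(backward_error(A, b, y_hat))  # small, as the solve behaved stably here
```

Whether that small backward perturbation translates into a small forward error Δy is then precisely the conditioning question, which is a property of the problem rather than of the algorithm.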
Continue reading “MA398.3: Stability and Conditioning”