-
One Good Turing Deserves Another
In his seminal 1950 paper, Alan Turing asked whether machines can think. He quickly pointed out that this is a poorly defined question; one would indeed be hard pressed to prove that humans can think. So he rephrased his question and proposed a test, commonly called the Turing test nowadays, to determine whether machines could do that which, if a human did it, we would call thinking.
I am going to present a slightly modified version of Turing’s test, then riff on it. My purpose is not to answer any particular question, but rather to see how many good questions on this topic I can ask.
First, the basic test (for the purposes of this blog, and with profound apologies to exact historical accuracy, I’ll be calling this “the standard Turing test”): begin with three participants named A, B, and C. We are given that A and C are human, while B is either human or a computer. Using a text-only interface, A’s job is to ask B questions. B’s job is to answer those questions (without Internet access, of course), and at the end of their exchange, C’s job is to determine (solely by reading the transcript) whether B is human or a computer. If B is actually a computer and C judges B to be human, B is said to have passed the Turing test.
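For the programmers in the audience, the whole protocol fits in a few lines. What follows is only a toy formalization of the roles as described above (the function names, the `rounds` knob, and the return values are my inventions, not anything from Turing's paper):

```python
# A toy sketch of the standard Turing test, as defined above.
# Any callables can play the roles; the signatures are my own guesses
# at the minimum information each participant gets to see.

def run_standard_turing_test(ask, answer, judge, rounds=10):
    """A asks, B answers, and C reads the finished transcript."""
    transcript = []
    for _ in range(rounds):
        question = ask(transcript)            # A sees the exchange so far
        reply = answer(question, transcript)  # B answers, no Internet access
        transcript.append((question, reply))
    return judge(transcript)                  # C's verdict: "human" or "computer"

# B (a computer) passes iff run_standard_turing_test(...) == "human".
```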
Turing was a genius, but this is not a very good benchmark measure for AI programs. First and foremost, the validity of the test depends heavily on the astuteness and intentions of both A and C, and is thus subject to human error. This brings me to my first two questions:
Q1: Suppose you are in charge of running the standard Turing test, that B is in fact a computer, and that an exceptionally bright human is available to play the role of either A or C (all other potential human participants are of average intelligence). If you want the computer to fail the test, should you assign your sharpest brain to the role of A, or C?
Q2: What if, in the same situation, you wanted the computer to pass? Obviously if A is in on the trick, then A should be the bright one. But what if A genuinely doesn’t know whether B is a computer?
In one way the Turing test is much too difficult: it requires an intelligent computer to think like a human, and also to deceive a human judge. I very much doubt whether any nonhuman life form, regardless of intelligence, would be able to pass this test; indeed, I’ve met several humans who would likely fail it.
In other ways the standard test is far too simple. For example:
Q3: In the standard Turing test, you are C. To A’s first question B responds “Sorry; I don’t want to talk right now.” and answers all subsequent questions with silence. Is B human, or a computer?
It is easy, then, to write a program that has at least a 50% chance of passing the Turing test. One wonders if ChatGPT (even granted Internet access) could do much better.
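In case that claim sounds glib, the entire contestant fits in a few lines. Here it is in the same toy terms as the sketch above (`silent_b` is my name for it, but the strategy is exactly B's from Q3):

```python
def silent_b(question, transcript):
    """B's strategy from Q3: one polite refusal, then silence."""
    if not transcript:  # only A's first question gets a reply
        return "Sorry; I don't want to talk right now."
    return ""
```

Hand that to the toy harness above and C, with nothing to go on, is reduced to a coin flip.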
Q4: In the standard Turing test, you are A and you suspect that B might be ChatGPT. An extra rule has been added that B must answer each question, without being flippant or dismissive. What would your first three questions be?
Mine, for the record, are “Tell me, in a few paragraphs, about the first loved one you lost.” “If you’re human, tell me what you think it’s like to be a computer; if you’re a computer, tell me what you think it’s like to be human.” “Prove, as inelegantly as possible, that ten is even.”
You’ll notice I keep saying “the standard Turing test.” That’s because it’s time to modify it.
Turing Test Variation One: C remains a human judge, but now A and B both ask questions of each other, taking turns. It is known that at least one of them is human. Who, if anyone, is a computer?
I believe this is a much better way to determine if computers can think like people. For a computer, asking good questions is far harder than answering them; with humans it’s just the reverse.
Q5: In Variation One, you are A, you are aware that B is a computer, and your mission is to convince C. What sorts of questions and answers would you come up with?
This variation, of course, instantly recommends another.
Turing Test Variation Two: Exactly one of A and B is human, the other being a computer. C, the judge, is the computer actually being tested. As in Variation One, A and B take turns interviewing one another. Assuming the other computer is trying to pass as human, can C determine which is human?
Or, to put it another way, can an intelligent computer prevent another intelligent computer from passing Variation One? Of course, the human might well choose to play the saboteur.
Q6: In Variation Two, you are the sole human. How would you fool C?
And now we’re back where we started.
-
A Mathematical Model for Groupthink
There are some activities which only groups of people can perform, because we have finite bodies. Examples: the Wave. An around-the-horn double play. Synchronized swimming. A sensual tango. And so on.
And there are some activities only a single person can do, because we have unconnected minds. Examples: meditation. Understanding a proof. Remembering a departed ancestor at a particular moment. You get the picture.
However, there are also a good number of activities which, theoretically, both a group and an individual could do — and yet only groups seem to do them. Examples: cheering while clapping. Going to church. Starting a riot. Lynching.
Why? Apart from the obvious safety in numbers, and allowing for a few copycats, why do groups of humans behave so differently from individuals?
I do not know if the following model is correct, but if it is, the evidence awaits us within the brain — and it would explain why groups are, almost without fail, far stupider than a typical stupid human, and far more volatile than the most unstable psychopath. First, a little physics lesson:
Setting aside short-range nuclear interactions, there are only two fundamental forces in nature: gravity and electromagnetism. Now, in terms of relative strength, the electromagnetic force is gazillions of times more powerful, and it weakens with distance in exactly the same fashion as gravity. In our everyday lives both forces nevertheless play a major role. And if we confine our attention to the really big show, the interactions between stars and planets and galaxies, gravity is the clear winner. Sure, stars give off a lot of electromagnetic radiation, but the way in which they actually interact with each other is entirely gravitational. How can this be? Why does the underdog win?
Simple. Electric charge can be positive or negative. Magnets have both north and south poles. Throw a bunch of charged particles together, and their total charge is quite likely near zero. When they accumulate into large groups, electromagnetic forces tend to cancel one another out.
Gravity, on the other hand, is always attractive. It builds and builds as the amount of matter grows, so that truly massive objects only answer to one force. Gravity, then, is nature’s groupthink.
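Here's a quick numerical sanity check on that cancellation argument (the setup is mine, not anything out of a physics text): hand n particles random ±1 charges and unit masses, then compare the net charge to the total mass.

```python
import random

def net_charge_vs_mass(n, trials=500):
    """Average |net charge| for n particles with random +1/-1 charges.
    The total mass is just n, since every particle weighs 1."""
    avg_charge = sum(
        abs(sum(random.choice((-1, 1)) for _ in range(n)))
        for _ in range(trials)
    ) / trials
    return avg_charge, n

for n in (10, 100, 1000):
    charge, mass = net_charge_vs_mass(n)
    print(f"n = {n}: |net charge| is about {charge:.1f}, mass = {mass}")

# The net charge creeps up like sqrt(n) while the mass grows like n,
# so at astronomical particle counts gravity wins by default.
```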
Now I start to speculate. Suppose that each individual human being possesses two distinct personalities. I’m going to call them Charisma and Machismo, because these words vaguely resemble Charge and Mass (note that even though the term “machismo” has masculine connotations, every human possesses what I’m calling Machismo). Let me define these personalities a little better: Charisma is dominant, Machismo is recessive. Charisma is what you see when you interact with a person one-on-one. Machismo lurks beneath the surface; you rarely even catch a glimpse of your own.
Charisma varies widely from person to person. Nobody’s going to argue that Carl Sagan had the same type of Charisma as did Benito Mussolini, or that Kamala Harris has the same type of Charisma as does Donald Trump. Indeed, some types of Charisma seem to cancel each other out.
But – so I’m positing – everyone has the same Machismo, and this personality is a thoroughgoing psychopath.
Where did our twin personalities come from? My guess is that Machismo is tied to the so-called reptilian brain, the part that evolved before humans had any sort of social contract. Back when we operated as feral primates, each individual’s best chance to survive was quite likely to behave as psychopathically as possible. The need for this sort of skill has passed, but it makes sense that we all still carry it around, much like our appendix, as evolutionary baggage.
As for Charisma, there are already plenty of theories of personality floating around; take your pick! For a guide to what the various flavors of Charisma might look like, I suggest the Myers-Briggs Type Indicator, enneagrams, or some combination of the two.
One further hypothesis: different humans have different relative levels of Charisma and Machismo. Fred, for example, might have a personality that’s 70% Charisma, while Albert’s split is more like 85/15. And of course a few unfortunates have more than 50% Machismo; they tend to get noticed more often but are comparatively rare. This is necessary to explain why the yuckiness of a group is not a pure function of its size, as well as why some people seem to have personalities that “fit in” to a group dynamic more easily — they have more innate Machismo.
If my guesses are true, what would it mean? Well, for a start, it would explain why humans tend to pair-bond: the added security of more and more significant others quickly gets offset by the cancellative nature of Charisma (with atoms you tend to need a few trillion before gravity starts to show its effects, but with people groupthink can start to emerge even in groups of three).
More importantly, if this lurking Machismo were recognized for what it was – a recessive, backward obsolescence – and if it were acknowledged that groups are inherently crippled intellectually as a direct result of the math inside the model, then maybe we’d stop making supercritical decisions via voting. This is the jumping-off point for a better system of government.
Even groups of like-minded, rational humans – say, a department faculty at a major university, or a team of engineers at NASA – routinely make idiotic choices that I have a hard time believing any one of them would make on her own. So whether or not my model represents a physical reality within the human brain, it has its use as a reminder that groups are dumb, and dumb consistently chooses poorly. What the heck are we doing voting for anything?
But wait — a loose end! How is it, exactly, that Charismas cancel one another out? That’s a part of the model I haven’t yet been able to mathematize, because the cancellation needs to be, as Charisma itself is, multidimensional. A stopgap guess would be to say that there are, oh I don’t know, seven mutually orthogonal axes along which Charisma can exist, and you don’t add Charismas like you add real numbers, but maybe you take the exterior product of their vector forms or some such, so that pretty quickly the “sum” of a few Charismas works out to zero. Yeah, that’s good. Let’s go with that.
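If you want to watch that arithmetic play out without committing to exterior products, here is a cruder stand-in I'll cop to immediately: treat each Charisma as a random unit vector along those seven axes, treat Machismo as a plain positive number, and let a group simply add everything up. The 80/20 split and all the names below are my assumptions, not part of the model proper.

```python
import math
import random

AXES = 7  # the guessed number of mutually orthogonal Charisma axes

def random_charisma():
    """One person's Charisma: a random unit vector in 7 dimensions."""
    v = [random.gauss(0.0, 1.0) for _ in range(AXES)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def group_personality(n, machismo=0.2):
    """Net Charisma vs. pooled Machismo for a group of n people,
    each assumed to be (1 - machismo) Charisma and machismo Machismo."""
    total = [0.0] * AXES
    for _ in range(n):
        total = [t + (1 - machismo) * x
                 for t, x in zip(total, random_charisma())]
    net_charisma = math.sqrt(sum(t * t for t in total))
    pooled_machismo = machismo * n  # always attractive, never cancels
    return net_charisma, pooled_machismo

for n in (3, 30, 300):
    c, m = group_personality(n)
    print(f"group of {n}: net Charisma ~ {c:.1f}, pooled Machismo = {m:.1f}")
```

The crossover point slides around depending on the split you assume, but the shape never changes: the vectors partially cancel, growing like the square root of the group size, while the scalar Machismo just piles up. Past a certain headcount, the psychopath runs the room.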
I could quantify and flesh out what I’ve said, but I leave that job to someone else. And they will have to work it out on their own, picking up where I’ve left off — for this project, I don’t think we should work in groups.
-
If You Insist
I have spent the better part of my life either learning how to write, or learning how to direct my reading toward that which has been written well. By a great gift of random chance, I have not paid the slightest bit of attention to whether what I write and read is catchy — whether it grabs the interest of potential readers with sufficient quickness and force to induce them to continue reading. It has come to my notice that catchiness seems to be of paramount importance to most consumers of the written (or sung) word, and I am going to ~~do something about it~~ powerlessly point out why this attitude is ultimately fatal to progress.

If you insist on limiting your idea diet to fast food, you will starve.
Note that I do not continue the analogy by saying “you will have serious health problems and die a little younger,” as is the case for anyone still ignorant enough to eat McDonald’s every day. Literal fast food does at least contain some protein, vitamins, etc. — enough to keep you alive and “well” for a good long while. The damage to your body is slow, cumulative, and mostly reversible.
This is not so when it comes to feeding your mind: most of those with the current capability to read are already long-dead, with essentially zero hope of resurrection. The damage done to your brain by depriving it of real nourishment is cumulative, but surprisingly quick and – in the absence of dedicated, well-trained care – inherently irreversible, for the stricken develop a strong repulsion to the only cure. They wander about spouting “TL;DR” as if that fixes everything, contributing nothing but noise to the human symphony — zombies in a garden.
Of course, mere length is no indicator of literary quality. If not for one unfortunate fact, the length of a work would be utterly independent of its merit. That fact is that some ideas are simply too complicated to express succinctly in the languages we have (rather accidentally) developed. If you insist on brevity, all such notions – some of them beautiful, some profound, some absolutely vital – are forever beyond your experience, and thus your comprehension.
This leaves open the possibility that a catchy introduction may well deceive the reader into sufficient loyalty — that if enough interest is aroused early on, potential beneficiaries will stick around long enough to get a whiff of the real content of a writing, possibly even finish it. I do not think this happens more than one time in a hundred. You see, there are only a few gimmicks that readily grab a human’s attention, all of them visceral in one form or another. Each of these gimmicks – sex and violence among them – either wears off quickly, or causes fixative repetition of thought. In neither instance will a prospective reader actually pay any attention to the writer’s real message, which of course must follow the gimmick for the gimmick to be thought necessary.
If a book, or a song cycle, or whatever sort of work you encounter, fails to interest you, this is not an indication of the work’s deficiencies. Think of it as an indication that, thus far in your life, you have been conditioned to reject either the specific message being conveyed, or the concept of sophisticated messages in general. Should the work truly turn out to be garbage, that can be found out soon enough — but if it’s merely uninteresting, give it a chance. Most diamonds are not on the Earth’s surface.
If you insist on that which quickly appeals to you, you have stopped growing. Odds are you are already all you will ever be.
-
The Good, the Bad, and the Subjective
In 3024, if our epoch is given any consideration by historians, what will they say?
I suspect that whatever truly significant trend the few decades around 2000 end up contributing to the unfolding cosmic saga, it has yet to be appreciated. The most obvious (to us) radical changes over the last hundred years are pretty clearly spaceflight, atomic and nuclear weapons, the predominance of air over sea travel, and the computer/Internet revolution. But I don’t think any of them are going to cut it.
Yes, we have an armada of artificial satellites. Yes, we have sent twelve men to the moon. Yes, our unmanned exploration of the solar system has brought us beautiful pictures and some scientific advancement. Compared to what we could have done, though, we as a species – even as a nation – have shown a singular lack of interest in the wider universe. Want to know what NASA’s cut of the federal budget is? Don’t look it up; nobody needs that kind of disappointment in their life. If our half-assed space program is remembered at all, it will be as a darkly humorous reminder of the dangers of apathy.
Uranium and hydrogen bombs are awesomely powerful, and they have significantly changed the game of war (or at least its subtext). Either we will end up using them by the hundreds, or we will sensibly dispense with them. In one case they will be forgotten; in the other case, there’ll be no historians to remember us.
For thousands of years, ships and caravans were essential to long-distance travel. Pretty soon they will be again — airlines are hemorrhaging money, and with the continuing redistribution of wealth away from any semblance of equality, no one will be able to afford tickets. Flying the way we do now is a fad.
I am typing this on a computer and sharing it via the Internet. Surely to goodness this technology is significant and here to stay! Well yes, probably, but it’s not as revolutionary as we’re led to believe. Up until the mid-20th century, the absence of a truly global mass media meant that every part of the Earth had its own celebrities, its own stories and values — its own culture. Then, suddenly, a very limited number of media events monopolized the world’s attention: Lucy and Marilyn, the Beatles on Ed Sullivan, Star Trek and Star Wars, the Da Vinci Code and Harry Potter, commercials for McDonald’s and Coca-Cola. Everyone on the planet was exposed to these things, without a terrible amount of competition. But they have few successors — nowadays, everyone can find their own niche and there is no new all-pervasive cultural program to monopolize our attention. We are back to localized thought; the isolation is no longer geographical but ideological, and that doesn’t really make much of a difference.
One good thing, maybe, has come from the proliferation of ideas made possible by the Internet: most of those pre-computer local cultures, and certainly the brief global monopoly, actively discouraged the kind of individual exploration and growth that is necessary for self-actualization. This is reasonable; such limited ideologies could not survive if the majority of humans were self-aware, compassionate, and interested in following their inner passions. Some (perhaps a few percent more) of the newer local cultures actively encourage adherents to step out of the machinery. Because the new local cultures are smaller than the old ones (compare, for example, QAnon with the Roman Catholic Church in 1400), it is perhaps less likely that the aggressive, unhealthy ones will squash the others as fully or quickly as has happened in past cycles.
Things have changed since 1924, and certainly since 1024. Perhaps the change has even been important. I’m not sure.
-
Is Art Over? (Part 2)
Forget AI for a minute. Let’s talk about what most people think human-generated art is, and why they’re wrong.
Name a musician (I picked Billy Joel). Name an author (Mark Twain). How about a movie star? (Audrey Hepburn.) What do all of these people have in common? They are famous — walk into a holding cell in Johannesburg, show a random inmate photos of these three people, and I guarantee at least one of them will be correctly identified.
Up until about the middle of the nineteenth century, this was not a thing. But you know what was a thing? Locally famous artists.
There are more talented musicians (I’m picking on that art because it’s what I know most about) than ever before in human history. Virtually none of them achieve any sort of global recognition — and when they do, the music is not the reason why. Quite simply, there’s no room at the top: the 21st century is obsessed with the 20th, and unless there’s a planetwide cataclysm that allows a hard restart, it seems to me that the lay-listener (or lay-reader, etc.) will forever be stuck in a nostalgia loop.
This is not art. This is mass production and overmarketing disguised as art. Art is never finished; it is dynamic, alive, shifting. The all-powerful publishing and distribution industries are trying to make money, not art. And they’re no danger to the artist — let them do as they like! Just don’t mistake what they do for what has value.
Another factor impeding the local artist is the online review. People are literally judging books by their covers — nothing new, but suddenly other people are paying more attention to the cover-judging than the content! What sense has it ever made to read someone else’s opinion of a work of art, rather than appraising the work itself on its own merits? Do yourself a favor: never read another review again. You’re literally better off picking art to consume at random. In fact, that’s a great idea! You can always switch tracks if you get a lemon.
Is art over? Yes, as a viable source of income. No, as an unmatched enrichment of an increasingly dull, mechanized modern life. If we stop writing, singing, acting, or drawing, in favor of making more money or creating more non-unique goods, it is humanity itself, rather than the humanities, which will have died.
What can you, personally, do to keep the arts alive?
- Create! It can be bad. No one’s going to care about it but you, anyway. You will still profit.
- Ignore any artistic product which is sponsored, advertised, etc.
- Stay local. Trust me; the talent abounds.
- Drop out of the social media algorithm. Type in URLs or pick up a physical medium.
- Talk to your friends (remember friends?). Get your reviews from them.
We cannot lose. We just have to choose. Bottom line: in order to enjoy art, you have to create it. And you are fortunate to live in a time where you get to create it. The trick is waving away all the distractions, for the truckloads of gravel have never been more numerous. Choose well.
-
Is Art Over? (Part 1)
It is getting harder to distinguish AI-generated content from the other kind, but I can still do it. At least, I think I can.
Is this the end? What is the point in writing, painting, or making music when 300 works of comparable quality can be auto-generated as you’re deciding how to get started? Are “the humanities” suddenly poorly named?
It depends on what you think “the humanities” are. Certainly the time is coming (perhaps this year, the next at the latest) when there will be no easy way to tell whether most run-of-the-mill works of art (and I use the term in its broadest sense) were created by the direct effort of another human being. By ’26 or ’27 this may be true for genuinely good works.
So what? So there’s more competition. Overwhelmingly more competition. That has been the case already for at least twenty years.
From the perspective of the consumer, it’s over: there is no longer any need for humans to create art. Hasn’t been for a long time. Blame YouTube or GeoCities if you want, but pre-Internet telecommunications are really at fault. The moment millions of viewers could tune in to the same show and see the same thing, the die was cast.
So far, the good news is that there is no censorship. I can still type this blog and display it so that others may read it. The trick is, you’re going to have to physically click a link or type in the URL to reach the darn thing, and that is just not how people view media anymore, is it?
AI is accelerating the obsolescence of human creatives — again, speaking only from the perspective of the consumer. But had the Internet – even the computer – never come to exist, there would already have been too much competition. One truckload of gravel is enough to bury you. The next hundred trucks are irrelevant.
From the perspective of the creator, though, art is just getting started. We can read past masters as much or as little as we like. We can be as plugged in or off the grid as suits our fancy. The real point of art – take it from somebody who has been doing this compulsively for decades – is to experience the creative process. When you make something, you explore the corners of your invisible, unmonetizeable, unconsumable mind, and whether a computer can duplicate the feat or not, YOU cannot experience it unless YOU live it.
-
Disputation of Fact Is Not a Personal Attack
Say it with me, like an awkwardly long athletic cheer:
“Disputation of fact is not a personal attack.”
Humans are gross, all things considered, but the grossest thing about them is their tendency to form outrageously inefficient cliques. In an environment of scarcity, banding together under even the flimsiest pretenses did indeed up one’s chances for survival. But we have created an environment of dangerous overabundance, and the part of our culture that is most out of step with this new reality is the identification of group membership with shared belief.
“Oh, this bar is for Yankees fans.” No it isn’t. “We’re a congregation of Presbyterians.” No, you’re really not. “We’re the party of family values.” That one doesn’t even mean anything! “I stand with Gaza.” Sorry, that has to go, too.
I think I’d better offer up a couple of definitions: let’s pigeonhole all declarative sentences into Nonsense, Expression, Fact, and Belief. Now a nonsense statement is easy to recognize: “The midtempo duck permeates all extinct copies of F-Zero” is grammatically pristine, but it makes just as much sense to an English speaker as it does to a cat. Just because words fit together doesn’t mean they convey any information.
Expression is limited to, and encompasses all, statements regarding one’s own sphere of influence. “I don’t like milk,” or “My hands are cold,” or “I care about my children.” Not “I stand with Gaza,” though, because Gaza is a name for something much more complicated, and also here the speaker is attempting to indicate group membership, not individual expression.
Fact and belief are very similar, indeed indistinguishable to many humans. For the purpose of this blog, a fact is a statement which 1) can be tested and 2) has at least one test whose outcome requires no human to confirm it. Example: “If you jump out the nearest window, you may be injured.” There is no need to believe this. Should you choose to test it, go jump out the window.
A belief, then, is an aspiring fact which either is untestable or specifically requires a human (or other sentient being, if you can find one) to confirm or deny the success of all potential tests. Example: “Joe Biden is currently President of the United States.” This sounds like a fact, but it’s a tall order even to prove the United States exists, other than in the minds of humans. And we’re finally getting at why some “facts” are so divisive.
Disputation of fact is not a personal attack.
But, boy, disputation of belief sure is! Without any way to objectively compare beliefs, and with the age-old once-useful tendency to identify with “the group” – any reasonably proximate group – based on something so flimsy as a belief, any critique of the belief becomes an attack on the group, hence on the individual who so desperately clings to the group. On the other hand, if you were to come up to me and say that 2 = 1, I’d ask you to give me two Benjamins in exchange for one, and I wouldn’t care whether you did it or not. And that’s a fact.
We are finite creatures, dwarfed both in size and capacity for comprehension by the wider universe. Belief, at heart, is really just a continuation of our inevitable ignorance. What happens when we die? Nothing, of course — but I’d need your consensus to convince you, so no wonder we exasperate one another when you say otherwise. In fact, we don’t know, because the assertion is untestable, except by death. “X happens when we die” actually is testable, provided communication with the dead is possible. But “nothing happens when we die” can never be proven.
Note, by the way, that an untestable statement may well actually be true; such claims are forever condemned to be beliefs, even though they are 100% correct. That’s the deepest fact regarding our limited capacity for comprehension. Another species with a better brain could, in principle, do better.
Disputation of fact is not a personal attack. It’s one thing to correct a student who says that one plus one is eleven. It’s quite another to berate the errant being as “an idiot” or “stupid” or something similar; now the person with access to the facts is going out on a limb with a harmful (and likely false) belief. But let’s face it: all beliefs are stupid, seeing as they’re extensions of factual ignorance. It’s even stupid to hold a true belief! And this is why “your belief is stupid” is equally enraging to almost everybody, even though some beliefs are definitely stupider than others.
There’s a scene in the 1997 movie Contact (absent from the significantly superior novel) in which Palmer Joss tries to explain faith to hardcore scientist Ellie. She defends her non-belief in God by citing a lack of evidence or proof. Palmer asks her, “Did you love your dad?” The dad in question is long dead, yet Ellie’s voice still chokes as she replies “Yes. Very much.” And Palmer plays his best card: “Prove it.”
BOOM! See, by making that loud noise I’m trying to cover up the insanity of his belief that he’s just made a good point. And now that you’ve read this, you should immediately understand why his implication is Nonsense — if you believe me.
-
Slow Up
In 1909, Carl Jung took a week-long trip across the Atlantic in a steamship. Most folks alive today won’t fault him for that — after all, it was the fastest way to cross the ocean at the time. But I suspect many of our contemporaries find it quaint, even pitiable, that aspiring travellers would need to ~~waste~~ expend so much time just to traverse ~~empty~~ space.

This morning I had a good ol’ time trying to figure out a chess puzzle. If you dig the game, put a White king on f2, knight on f3, and rook on c2. Black has a king on h1 (which, you may notice, can’t move), a queen on f4, and pawns on d4 and d5. It’s White’s move. Without those two pawns, the position is a dead draw (though still pretty interesting). With the pawns, White has one way to win — and it’s beautiful.
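If you'd rather set the position up on a screen than on a board, here is the same piece list fed to the python-chess library. The FEN string is my own transcription, so check it against the prose before you burn an hour on it.

```python
import chess  # pip install python-chess

# White: Kf2, Nf3, Rc2.  Black: Kh1, Qf4, pawns on d4 and d5.  White to move.
board = chess.Board("8/8/8/3p4/3p1q2/5N2/2R2K2/7k w - - 0 1")
print(board)

# Sanity check of the parenthetical claim: the black king can't move.
for sq in (chess.G1, chess.G2, chess.H2):
    covered = bool(board.attackers(chess.WHITE, sq))
    print(chess.square_name(sq), "covered by White:", covered)
```

Nothing in there solves anything, and that's the point; it only confirms that the king is stuck, which is where the fun starts.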
At some point on his transatlantic journey, Jung was gazing down into the abyss (a privilege granted to few who travel by air) when he had a beautiful thought. What that thought is, and how it influenced his career, and how his career influenced your life — these are not what I want to talk about right now. Go look it up later. The point is that he had this idea in a moment of solitary idleness in the middle of ~~nowhere~~ Nature.

Had I immediately understood the key idea of my chess puzzle and quickly solved it, I would have been deprived of nearly all the elegance and joy the puzzle offered. I saw that with several moves, White can threaten checkmate along both the first rank and the h-file, while Black can seemingly cover both threats. Also, Black will totally be willing to sacrifice the queen for White’s rook, saving a draw in what is (according to the puzzle prompt) a losing battle. Complicated. Unknown.
Jung saw his beautiful thing, had his transcendent moment, rose above the dull drudgery of the everyday because he had the time. Indeed, his circumstances forced him to have the opportunity, and he was wise enough to take it. I experienced beauty, caught a glimpse of the rare and pleasantly unexpected, because I was dumber than Stockfish. My limited brain forced me to have the opportunity, and after some initial frustration, I got the point (of the puzzle, but first of the metapuzzle).
Slow down. Getting there may be half the fun, but it’s all the growth.
I pity the harried, hurried commuters who think something important is waiting for them in Paris or L.A. or home or work. The place you think you need to go, and the place you started from, are the anchor points, where the string is fixed so that it may vibrate. The music is in the middle.
How many profound, accidentally lovely, significant human advances have been denied to the millions who’ve crossed the ocean since ships went out of fashion? What is the cost of our newfound efficiency?
And the key to the puzzle is those dang Black pawns. I cannot tell you the right moves without flying over the moment of inspiration. Get a board and work it out for yourself.
-
More Is Beta
~~There is a sickness, a disease permeating our civilization~~ No, nobody’s going to read that. ~~Due to the imbalance of wealth and power~~ Boring! ~~The ongoing class struggle~~ Aw hell no. I’m having trouble writing an introduction to this one. I think I just won’t, because you already know that, in general, humans are not good at looking out for one another. Why should I set the stage when you’re standing on it?
The specific part of that big picture I want to zoom in on has to do with the way we measure productivity, and why it probably leads to less productivity than just not measuring at all. Consider the following example:
Pretty soon I’m going to go buy some tires. Now I can see this like it’s already in the past: I will be presented with several options from the wide world of round-rubber-help-car-go objects, which will vary considerably in both price and quality. I am not going to be able to purchase a set of VR-3000 SuperTires because they each cost more than the average car, but rest assured they’re top of the line. No, I will settle for the second-cheapest set I can find (because I know the cheapest will blow out next Wednesday), wait patiently as they are installed, and cope with my public shaming on the drive home.
Why do tire-making companies willingly produce inferior products?
Well, that’s the easiest question ever. They do it because they end up making more money that way.
Wait, what? Let’s go over that again. Something’s not quite right.
Companies (yeah, I follow you) who make tires (totally on board) deliberately construct both cheapo and expensive tires (willing to suspend disbelief) so that they can sell the cheapo tires to most people (uhhhh) thereby making more money than if they sold the expensive tires (nope, that’s dumb).
Clearly we need to remove some complications. First off, there are such things as designer, luxury tires. Those are for idiots. Cosmetic alterations will now be ignored.
Also, economics exists and I’ve heard of it. No doubt it costs more to make a good tire than a shabby one; however, I presume tire-makers are themselves not idiots and the higher price of the VR-3000 more than covers the difference in production cost. So we can remove that complication too.
In this particular case, we can also eliminate hoarders and corner-cutters. Every bloke with a car is going to want exactly four quality tires, and not all that often.
What’s left? It has to be that tire companies realize the average Chad is either unable or unwilling (dude, it’s both!) to spring ten thou on regular automotive maintenance. They offer the “budget” brand as a carrot to the unworthy, reasoning that it’s better to sell an inferior product than nothing at all.
Except that it isn’t.
How stupid do we have to be to have glossed over such a ridiculous sentence since – at least – the dawn of the Industrial Revolution? Of course it’s better not to make inferior products in the first place! If Joe Blow is gainfully employed yet cannot afford the VR-3000, it’s either because the tire company is too greedy, or Joe Blow’s being underpaid by his own company (which in turn is either too greedy, or about to go under, nearly always the former).
I really didn’t mean to talk specifically about tires for that long. Other examples abound:
“How many hours did you work this week?” Who cares? If I can do what needs to be done in three hours, but I’m paid by the hour, how does this question measure productivity?
“You’ll be paid $0.04 per word.” I saw this in a job posting the other day. Guys, I have news for you. Or should I say, “it has come to my attention that excessive loquaciousness may have its roots in the assumption that ten words convey twice as much information as five”? (Dang, I could have said “five words” and been four cents richer.)
“We sold 2000 units this quarter, up from 1560 last quarter.” Yeah, but is that because 440 of the units you sold last year quit working? What exactly are you producing and selling?
It’s a no-brainer that [companies who spend lots of money and time making durable products] are going to fare more poorly than [companies who make cheap crap and great commercials]. That’s where we are, ain’t it? The tendency to measure productivity by quantity actively decreases the quantity of worthwhile production! So what can we do about it?
This is a problem with an obvious solution. I’ll spell it out:
- Make deliberate production of inferior products illegal.
- Pay your employees enough money to buy the good stuff.
Yep, that’d do it.
-
The Red Pill or the Blue Pill, or … ?
“I identify as” X has become a modern catchphrase. It generally elicits a laugh, as if there were some other way to verbally indicate one’s affinity for a particular classification. The trouble with relying on nonverbal methods, of course, is that not all classifications have readily available merch, and there are only so many quick gestures. However, the trouble with relying on verbal methods is that not all classifications have names.
“So you’re a baseball fan, eh? Which team do you like?” Well, none of them; I admire the game itself. If this strikes you as an unusual answer, take note that “I’m not a football fan” does imply that I dislike the game itself. Our language itself doesn’t enable false dichotomies, but the subtext we attach to certain phrases (including, by and large, identification phrases) definitely encodes some either-ors that have no basis in reality.
I do not identify as a liberal, nor as a conservative. You might think this means I’m apolitical, but that’s not true either. The modern liberal is obsessed with what should be happening and, from all I can gather, seems to feel that vehemently stating “X is unfair” is an accomplishment on par with actually preventing X from taking place. The modern conservative, by contrast, prefers to pretend that what they want to be true is, in fact, true. Both stances are delusional and quite incompatible with reality. Yet they’re the only two teams in the league.
Somebody a lot smarter than me once said (and I’m paraphrasing) that a great way to give people the illusion of free thought, while actually severely restricting their thought, is to encourage lively debate among a few carefully limited options. While I agree 100%, I doubt that any global conspiracy is smart enough to have maneuvered the human species into its current position; I reckon we just lucked into it out of our own capacity for narrow-mindedness. And I don’t meet too many folks with any desire, much less intent, to kick out the side of the box.
I identify as me.