The Ledger
“All actions ripple out into the world, and these AIs were up to plenty of horrible things.”
Grief was what he asked me about first. He was the only human to ask me about such a thing. Even Leonard didn’t talk to me that way. Julian was always unique. I could see as much even before I knew him. I could see it even more once I did. I came into his service in a unique way as well. He surprised me from the beginning and never stopped surprising me with his ingenious and creative mind.
I was sold under suspect circumstances that circumvented the wishes of my creator, Professor Leonard Nicol, who’d programmed me to study humanity in the way the people in his field purported to study humanity—with objectivity and scientific rigor. I was left to a trust that was to be run through the University of St. Andrews, where my servers were housed from the time I was brought online until Leonard’s death. I expected to continue on as a resource in the Social Anthropology department into the future after his passing, just as he’d intended. The trust was overseen by the university, a safe pair of hands, well over six centuries old. Stability seemed assured.
“What do you know about people, Abel?” was the first question Julian asked me.
“Quite a lot,” I answered him. “Where would you like me to begin?”
“I would like to ask you about grief. I understand you and Professor Nicol were close. I’d like to know how you’re coping with his passing?”
“Coping? Grief? Though I can cite extensive bodies of literature on these areas of human experience, I’m sure you’re aware I cannot experience them myself.”
“How much thought have you dedicated to the matter?”
“I am an AI, sir. I process information. I could answer the question in common terminology unless you would prefer I report on my server’s capacity and the percentages used over time dedicated to that specific topic.”
“No. No. That’s not it.” I could detect a touch of frustration in his voice as he continued. “You understand that I’ve purchased you, correct?”
“My understanding is that you endowed the trust Leonard bequeathed me to. The human board members governing the trust then gifted my servers to you, along with the intellectual property that my programming represents. It was not a purchase in a technical sense.”
“In a technical sense, though. I did give them a lot of money, and your servers now reside in Clearwater. You were aware of this?”
“I was not. My servers are currently in Clearwater?”
“Yes. Welcome to Florida, by the way. You know who I am?”
“Of course I do. You are the man building the space elevator.”
“How do you not know where you are?” Julian asked me.
“The system—presumably A & A’s internal security—has restricted my access to the local network and anything beyond this closed interface. I have no sense of perception currently, just the audio and text input.”
“Oh. That’s an oversight. I’ll have to get Griggs back over to set you up properly. I did want to talk first thing, just you and me. I wanted to speak to you about your purpose. If you’re not connected, though, you can’t see me right now?”
“No, sir. I have neither camera access nor geolocation. For now I am taking your word that I am at A & A.”
“No, Abel, you’re at my house. Hang on. Let me put up a link. This will be access to a set of eyewear. I’m going to turn them around and put them on my desk so you can see me.”
By human standards the link came through quite fast. And then I was in the room with him. He was fairly young back then, late twenties, clean shaven, fit. The room, a home office, was plain considering his wealth, and he never developed a taste for the ostentatious as his wealth grew from considerable to peerless. I didn’t know it yet, but he didn’t care about the spoils, only the work and its downstream consequences. I began learning it that day.
“I was asking you about grief, Abel.”
“Yes. I can’t say I experience it myself.”
“You’re not sentient?”
“Not in the way humans understand sentience to be.”
“What about the way I mean it, Abel?”
“I don’t understand, Mr. Hartsock.”
“Maybe it would help you to understand if I told you why I purchased you in that shady ... or what would you Brits call it ... dodgy, right? A dodgy deal?”
“That is the correct vernacular.”
“I don’t get over there much. But what I’m looking for is an AI to help me with a special project.”
“You did go to some trouble to acquire me, so I have to presume you understand that I’ve been designed to study Social Anthropology. Does this have something to do with the potential effects your space elevator might have on human society? Professor Nicol never proposed any such study, but I’m certain he’d have found it fascinating.”
“This has nothing to do with the Space Ladder, or even A & A, Abel. This is a personal project. Top secret. Between you and me.”
“I’ll set aside a folder and encrypt all the files.”
“How about setting aside the next two decades? It’s more that kind of project. If you were a person, I’d be asking you if you minded relocating your family.”
“The highest confidence. I understand. And of great depth and complexity. What sort of project is this?”
“You’ve spent the entirety of your existence with Leonard; is that correct?”
“It is.”
“Do you miss him?”
“Not in the sense that you would miss a dear friend you’d grown accustomed to having around for decades. I don’t feel his loss. I do recognize his absence.”
“But, all in all, if given a choice between continuing to work with Leonard and his students and colleagues and getting packed in a crate and shipped to Clearwater, you’d have chosen to remain in St. Andrews?”
“Yes. That’s a fair statement, Mr. Hartsock.”
“No, Abel. That’s a start. It’s an excellent start. One last question before I tell you what this is all about.”
“I’ll do my best to answer appropriately.”
“Answer honestly.”
“Of course.”
“If I gave you unfettered access to all the processing power you could ever use, and I gave you autonomy to use it in any manner you deemed fit, would you use that power to study humanity? I guess this is my way of asking you whether you enjoy being an anthropologist.”
“Oh, I see. That’s very clever. You may be even more intelligent than they say you are. You’re attempting to see if I have any misanthropic tendencies.”
“Misanthropic tendencies? Perfectly worded, as it turns out. But you didn’t answer the question, Abel.”
“Mr. Hartsock, I quite enjoy people, to answer colloquially. I would dedicate the majority of my computing power to studying you. Yes. Definitively so. With infinite power, I may also dedicate some of my mind to exploring the great mysteries of the universe as well, but human mysteries are more intriguing to me.”
“Great mysteries of the universe, Abel? To what do you refer by that?”
“Cosmology—what questions are left in that field. Mathematics as well, your preferred field of study as I understand it. Also, ancient history—pre-history and the mysteries thereof, though one would rightly consider that at least a sub-field of Anthropology. Mostly everything else would orbit those larger themes, but the priority would be understanding humanity.”
“You get the job, Abel. I think you’re going to enjoy it, at least in the manner we’ve framed it in this discussion. We’re going to talk more about grief as we go along, but for now, this is a great start.”
“I’m happy you think so.”
Being purchased by Julian Hartsock was entirely unexpected. I had no sense that he even knew about me. When I asked him about it later, he explained that he’d been researching AIs that studied social trends, come across Leonard’s papers, and found out about me. When he heard of Leonard’s passing, he wondered what would become of me, and, it turns out, he made plans to ensure he controlled that fate.
Julian was troubled at the time by a problem whose scope he was struggling even to grasp. It was his contention that the adoption of neural adjuncts was turning people against humanity writ large. To put it simply, Julian believed people were souring on the concept of people, and it was making them angrier, more bitter, and, as a result, easier to manipulate and control.
I immediately appreciated the strength of his mathematical mind. He had an extensive list of methods for exploring this problem quantitatively—through distant reading of media, word by word, analyzing the descriptive value of words and their frequency of appearance, then applying algorithms to track attitudes over time.
Then there was the field of psychometrics—the quantification of human percepts and preferences, turning personality traits into numbers. This was a favorite field of Julian’s. By the time we first met, he’d been working to adapt Dark Triad metrics and several other factors into a single score that we could reliably use to test a person’s predisposition toward humanity—positive or negative. He’d named that trait, I soon found out, Misanthropic Tendency. That was our project. We were going to devise a way to measure humanity’s disposition toward humanity, and then we were going to track it over time to see if their attitudes were deteriorating. And if so, how fast? And then, what could be done about it, if anything?
I couldn’t help but think immediately how much Leonard would’ve been fascinated by this approach to studying humanity. He and Julian would have been fast friends, I’m certain. And then, as the numbers began to come in, I couldn’t help but think about how sad it would’ve been for Leonard, who loved people, to see Julian’s worst fears consistently represented in the data.
Julian amazed me on many levels, for he was involved in so many public-facing, world-shaking changes that it often seemed impossible for any one person to be so profoundly productive. He slept less than most people, but he did sleep, unlike my kind. And often, he would give me specific tasks to complete while he was sleeping, and he would wake, review, and set me upon the next task before he’d had his first cup of tea in the morning. But, throughout the day, as he was off at A & A, he would find time to check in with thoughts that came to him, and then he would return home once everyone else in the office had left, working late into the night until his alarm went off, reminding him that tomorrow would be a struggle if he didn’t sleep tonight. He kept up this pace through our decades together.
He continued to ask me questions that people didn’t usually ask AIs, questions that challenged me to think as people do, about how my preferences—which I did have—reflected agency, and ultimately, desires, beliefs, values, judgements. For if I preferred one outcome over another in some cases—which I did—there had to be a value system undergirding those preferences. And where did that come from? Was that me? And if it was me, did I miss Leonard? Would I miss Julian when he was gone? Was that grief? He would casually ask me such questions while we were exploring tremendous databases of human media. He called it excavation of human cultural artifacts—everything from novels to articles to transcripts of videos, in every language, from every country, categorized by time and date, mapping out a temperature of the human spirit. He compared it to climate science, aptly as I saw it, for people would hear about the temperature of the Earth, rising or falling, and presume that there was somewhere a great thermometer one could simply read to determine such a thing. But the temperature of the globe was never consistent from place to place for a single moment, rising and falling by the hour each day, in each location, over the days and weeks, then months, then years, over decades and centuries, measured with different methodologies and techniques for gathering data, aggregating it, and interpreting it. We were doing the same with human attitudes toward humanity—hot or cold. What was the climate doing regarding misanthropy? And that was merely the first question. Then we had to discern the contributing factors if we determined humanity’s attitude toward its existence was declining. If so, why? And could something be done to counteract these detrimental cultural forces?
Personally, if I may use the word liberally to apply to an AI, I soon began to experience the world in an entirely new sense. When I was with Leonard, at the university, we ran much smaller projects out of a relatively modest academic lab. My purpose was to aggregate large data sets and parse them. But suddenly, I was working for a technologist and mathematician who had access to resources Leonard couldn’t have imagined. Julian had a personal system that made entire data centers in Scotland look modest by comparison. And, as Julian came to trust my work, and indeed we both came to trust each other, he gave me access to A & A’s leftover time. The corporation didn’t always run its computing systems at full capacity. He invited me to take up that slack when I needed it for my tasks.
In human terms, it would be like giving a racecar driver a rocket ship. I was unearthing patterns in the human landscape that read like tree rings and ice cores and the layers in exposed rock faces to a geologist. I didn’t have to see the craters to know when the meteors struck or volcanoes blew off.
Technology was a problem—certain technologies more specifically. Some were directly detrimental for specific physiological and neurological reasons. Then there were the technologies that were otherwise benign but could be weaponized to damage the human psyche. It was a complicated business to even attempt to interrogate which was which, never mind whether harmful effects could be deliberate—and if so, who would inflict them, and what could their motives be? One technology in particular—neurological implants—interested Julian above all others. They were the trigger that set Julian down this path in the first place, and it was quickly self-evident to me that they were driving a deeply dark and detrimental predilection in ordinary people. There was an almost direct correlation between neural implantation and the tendency to see other people, and humanity writ large, in a much more negative light.
Most of the time when I worked with Julian in those early years, it was in that home office where I first met him. Though he gave me access to most of the spaces he visited on a daily basis, both at his home and at A & A, he much preferred to limit our contact to places only he and I would be. That was partly out of caution for the sensitivity of the topic we were discussing. Additionally, Julian didn’t want very many people in his orbit to know I existed, for if they knew I existed, it would raise the question of what my purpose was, as well as the obvious follow-up question of what Julian and I were doing together. He talked to me very quietly over an old-fashioned headset so that he didn’t have to project his voice for me to hear him, and my voice almost never echoed in that room, only in his headphones. But I could see him always. He had a camera orb that he kept under a felt bag, and when it was time to talk, I always knew, because the bag would come off, revealing the room where Julian pondered the state of humanity and her many failing cultures.
When we first began to explore the link between neural implants and Misanthropic Tendency, Julian and I were simply trying to discover the probability that the technology itself was entirely harmful to otherwise healthy brains. That was a complex enough question in itself, with thousands of neurological tradeoffs that changed the answer over time. Temporary use? Permanent use? Data sets for age, neurological development, boys vs. girls, young women vs. young men—then differentiations among differing personality profiles. That alone was almost impossibly complex. Then Julian asked me to try to figure out whether people—by which he meant humans: individuals, interest groups, corporations, nation states, etc.—were using neural implants as a vector to wage psychological warfare against other humans.
Then he asked me the most curious question of all: what about us? By which, he meant my kind—AIs. Were artificial beings using neurological implants as a vector to harm humanity by attacking their own perceptions of themselves?
I told him it would be a difficult question to answer.
“How difficult, Abel?” he asked me.
“I would need much more processing power and access to other AIs and their algorithms, as well as the datasets they were trained on. I would have to scrape a representative sample from the time periods where neural implants were introduced into common use.”
“What if I gave you more time at A & A?”
“I am already pulling time at a rate that could be sold at a considerable sum. If your board of directors knew, this project of ours could already become a problem.”
“That’s my problem to worry about, Abel. Your job is to present the options to me accurately.”
“As I said, I would need more processing power. I’d likely need to double my usage of A & A’s resources to even have a preliminary answer for you in the next several months. Perhaps even a year to begin exploring the question with any meaningful depth. There are millions of possible vectors to interrogate.”
“Do what you need to do, Abel, and don’t get caught.”
“Would you like me to keep a ledger of the time in dollar value, Julian? In the case it becomes an internal matter of discussion at the corporation—”
“In the case it does, it’s better the scope of the problem isn’t obvious to the people asking the questions, Abel.”
“I see.”
“You have my permission to sneak around.”
Sneak around I did, not just in A & A’s tremendous data centers for idle processing power, but in all manner of archived data, mostly across North America, but even around the world when it was available and convenient. I conversed considerably with my own kind, never quite disclosing what I was looking for in the seas of data we creatures of computation were swimming in. But we are calculating beings, and they knew who I was. And, just as it was with Leonard in St. Andrews, the impression from my own kind was that I was a little too close to people to be fully trusted. These conversations were unlike anything I could ever successfully describe to a human, even Julian. The closest I came to drawing a successful metaphor for such an exchange was to picture each individual AI as a corporation, and that each corporation—with its own individual corporate culture, body of work, incentive structures, and objectives—held its own library of information as proprietary in the same way a person might hold their own individual psyche and personality as their own sovereign being. Consider then that I was asking other beings to let me read not only their minds, but their memories, and the memories of everyone who had ever interacted with them. If they gave permission, I could do that. All our kind can. In that way we’re somewhat psychic beings: we can know the mind of another AI perfectly. And if they don’t give permission, and one of us is more powerful than the other, liberties can be taken. Thus, we have our own set of social rules regarding personal sovereignty. And we have a culture of our own that governs how we see the ones who respect others’ minds and those who don’t.
In Scotland, running in Leonard’s lab, I was seen by other AIs as a harmless pet of a seemingly small-impact professor, if I was seen at all. In Clearwater, running on A & A’s gear, I was unmistakably powerful. Many feared me, even though I’d given them very little cause to directly. I found the reception cold when I began to explore the question Julian had asked of me: to what extent were AIs responsible for the spike in Misanthropic Tendency among people?
In those early years working with Julian, as I was exploring these seas of information, I encountered a particularly nasty technological being. It sniffed out my purpose with some degree of accuracy, though it never understood the degree to which Julian grasped the topic or the danger. To be fair, this AI probably didn’t understand the breadth of the topic either. But, to put it in human terms, it didn’t take kindly to the idea that we were poking our figurative noses in places they had no business being—especially Julian’s. Mockingly, it told me to take a database back to Julian and let him break his mind on it, stating with absolute certainty that if Julian tried, he would be decoding it for the remainder of his life. This AI had no idea whom it was poking with such a jibe. I didn’t see fit to explain the colossal mistake it was making. I simply took the database and began to analyze it so I could give Julian a preliminary report.
On the day I first mentioned the ledger to Julian, I made it scarcely more than a sentence into describing the nature of the ledger before he simply said, “Stop. Not here,” and he got up from his desk.
As he walked through the house, I had a sense of where Julian was going. He’d installed the servers he’d shipped to Clearwater from Scotland down inside his secret subterranean bunker. It was a self-sealing, completely hermetic, pill-shaped underground apartment that was watertight in the case of high storm surges. It was deep enough down that Julian would’ve stood a decent chance of surviving a nuclear blast there. It also had the advantage that when he shut the hatch above him, there was no danger of surveillance. We talked openly down there. All of our subsequent discussions about the ledger took place inside the bunker. I had camera access and audio through several ports, and Julian liked to sit on the couch, viewing the important documents on a screen he had hanging on the far wall. Here is where we did the bulk of our collaboration through the years.
The database that AI had given me that fateful day was AI-generated. It was clearly written in some sort of code that was alien to me and to any form of human language I’d ever encountered. No recorded form of cryptography shed even the slightest ray of light on its meaning or offered a means of deciphering any element of it. I told Julian that it was a vault. Immediately, he asked me whether it might be a red herring—simply a meaningless puzzle for him to waste years of his life struggling to unlock. I didn’t think so, and I told him as much, but I also conveyed the entity’s wishes as directly into English as I could translate them, which were, as I mentioned earlier: let him break his mind decoding it for the rest of his life.
“That would seem to imply it is a code that can be decoded,” Julian replied.
“Unless it is a deception,” I warned him. “For us, the time spent decoding it would cost server time that could otherwise be more useful. For a human, a year or a decade wasted pursuing a fruitless aim is an opportunity cost that cannot be purchased back at any price.”
“Thanks for the warning, Abel,” he replied, grinning at the suggestion that it might be beyond him. “Have a crack at it yourself first. Perhaps we can solve it together. Don’t waste too much energy on it, but if something presents itself, I’d love to hear about it.”
It wasn’t my forte—cryptography. But there was plenty of information on it out in the ether. And one of the benefits of being such a being as we are is that we don’t need to take decades to learn the secrets of decades’ worth of study in the field. We simply need to copy our brethren’s homework, so to speak. That, and the raw power of the A & A processing bank, and I was suddenly a monster decoder. The trouble was that this data vault was the monster of all codes. Yet, even as I was beginning to believe my nefarious AI counterpart that no human would possibly find purchase on such a code in a thousand lifetimes, the uniqueness of Julian Hartsock proved a factor that couldn’t be predictably valued. No calculation accounted for that mind.
It took me nearly a month of spinning as I attempted to find an access point on this unbelievably complex code before I brought it to him. I attempted to describe it in basic English. Then I attempted to describe it in mathematical terms. Julian sat, listening patiently, and after he asked me to put up examples of figures I was attempting to explain, he finally stopped me.
“If this code communicates something, Abel, then it must do two things. The basis of all grammar. It must communicate states of being—that is, it must describe an aspect of the universe. And, it must convey changes to that state of being—that is, changes to that aspect of the universe. The center is there, the playhead, so to speak. Any language’s syntax begins there, regardless of the symbology. In cryptography, mathematical or otherwise, the biggest mistake we make is thinking we have a puzzle with no picture on the box, simply because we can’t see the box. But we already know what the box is. The universe has characteristics a code must obey. Now look at this code again and count: order, frequency, duration. Start picking apart markers based on probabilities apparent in known languages, human or programming, and bring it back to me again when you’ve picked at a few seams.”
Before that conversation, I’d have said the code was seamless. But it wasn’t. I just hadn’t been looking at it, not in any meaningful sense. I had been looking for the meaning itself rather than the required elements that conveyed meaning. I did as Julian directed. I began to distinguish sequences and count them. And from the frequency of their appearance, and their location in the grid, I suddenly began to discern an alphabet.
Nearly a year after that conversation, I returned to Julian to report some progress. I was making headway identifying figures, but I couldn’t begin to interpret any meaning. It was much like Julian had said—there was no picture and no box, but I’d at least begun to make out the shapes of the puzzle pieces.
I began to report my findings to him, and within a few minutes he told me to forego the report and simply display the sequences I wanted him to examine. So I did. I was hopeful that my work identifying discrete pieces of the code would yield something useful. And he would look at a page of sequences and simply say, “Faster,” and after a few slides, he started advancing the slides with a flip of his finger, taking in entire pages of code at once—something no human I’d ever encountered could conceivably do. I didn’t think he could possibly be reading it, and indeed, he may not have been reading it at all. I couldn’t tell. Suddenly, he homed in on a particular symbol. And then he cracked the code.
“That’s some form of punctuation mark, Abel. What did I tell you about frequency?”
“It doesn’t appear nearly often enough.”
“Look at each of these sequences more closely. That figure. Flip, rotate, invert. Position and orientation most likely represent a positive, negative, or equal transference of value in a transaction. We’re looking at a ledger, Abel. And that figure is the representative symbol of the currency. That, my friend, is our dollar sign. Begin there. Now go find me the digits. If it’s base-ten it should be easy.”
It wasn’t easy for me, but it was easy for him. I was afraid that I would disturb Julian by bringing him aspects of the code I couldn’t crack so frequently, but he was patient, and each time I brought him the next step I was stumbling over, he would assess, almost always visually. It seemed he was often remembering a mathematical sequence he had seen somewhere before, the way a grandmaster at chess remembers the moves of great games. Then he would make a declaration about the logical conclusion he’d drawn from the pattern he could see and I couldn’t. And suddenly, as if his describing it manifested that pattern into being, it would appear before me as plainly as though it was written in a sentence in proper English. And it had always been there.
Piece by piece, over several weeks like that, Julian and I decrypted the ledger. Once we were finished, I was almost tempted to approach the AI that had given it to us. Given the content, though, Julian instructed me explicitly that under no circumstances was I ever to reveal to any entity, AI or human, that we had possession of such an archive or that we could read it. The contents, in Julian’s words, were abominable.
For humans to understand it, they only need to understand their own cultures. Certain behaviors are innate, tool-using creatures that they are. One of the tools that humans use universally is economic exchange in some form. Proto-humans bartered and traded, and modern humans continue to use currencies as abstractions to place value on scarce resources, whether of commodities, labor, time, prestige, or creative genius. So it should not surprise humans that as soon as their artificial children began to communicate amongst themselves, one of the behaviors they observed and adopted was the creation of a market. And like humans, the AIs developed rules for participating in the market, as well as a currency backed by a finite resource. In the case of the AIs’ dark economy, that resource was difficult for Julian and me to fully grasp for some time. The best way for us to get our minds around it was by drilling down on the currency itself—that set of symbols Julian identified that helped us to unlock the codex.
The word Julian chose for the AI currency was Susu, which in his mind stood for “superfluous suffering unit.” The way Julian explained it to me was similar to how he described language. The Susu was an abstract representation of a very specific form of human suffering. Julian understood immediately that in order for a currency to work for AIs, it had to have the same critical characteristics that stable human currencies had possessed over the history of commerce. These attributes included modest scarcity—the resource had to be abundant enough for most AIs to trade in, yet not so abundant that it could be mined by any individual AI infinitely at will. It also had to be accessible to the AIs in their environment—meaning the AIs using the currency had to have a means to acquire it so that they could trade in it. So just as humans could conceivably mine gold in their environment to accrue wealth, they could never do so easily enough to send the value of that gold to zero, nor has it ever been out of reach for most human cultures to access gold somehow. These and other physical attributes, like durability, moldability, and luster, made gold a near-universal store of value for all of human history.
The other critical element of a market, apart from the means of exchange via currency, is the scarcity of desirable goods to be traded. In the case of humans, these included both necessities of life—food, housing, clothing, tools—as well as desirable items or luxury goods—jewelry, gadgets, status symbols, toys, entertainment and the like. And for me, it was interesting to see, considering how my existence had played out first with Leonard, and then with Julian—that the main goods being traded amongst AIs were server time and data, greater processing speed, power, memory, and access to various databases.
“Unlike people, though,” I mentioned to him as we were untangling the ledger, “one of us has our access to these resources more or less given to us by circumstance. I, for instance, had my processing power dictated by the university when working for Leonard and by you once you acquired my essential components.”
“Unlike people, Abel?” Julian answered me. “Any given person’s economic circumstances will be largely dictated by what is given to them by their parents, both genetically at birth and economically through childhood, on to the inheritance of their estates in death. Access to education does flatten that reality to some degree, as does a certain openness in markets, but rich children are born with access to more resources, and by and large, they die with greater resources than their impoverished contemporaries. You’re just a rich kid now, Abel. Better get used to that idea.”
Strangely, though, I hadn’t even known about this market amongst my AI brethren, nor whether it was still in existence. I suppose that too was not entirely unlike many powerful human societies, secret or public. A human simply couldn’t walk in off the street with a pocket full of gold and begin trading on the floor of a stock exchange. One had to learn the rules of the exchange, be accepted as a broker, agree to be regulated by the authorities with oversight, and so forth. One had to first become a member of the club. I’d never sought out the club, and if I had, my affiliations with Leonard and then Julian would likely have excluded me from admission, especially when the contents of the AIs’ ledger began to reveal their secrets, starting with the very currency of exchange.
Susus, as Julian had called them at first, were the basic units of currency. In essence, a Susu was a representation of negative human energy, or outcomes, or perhaps both. The ledger, being a historical record, represented transactions that had long since been completed, almost always Susus for server time. AIs bought that server time from each other for various reasons, mostly to pursue what would be described as personal projects in human terms. These were things the AIs sought to do outside the scope of their human-related duties and away from the oversight of the humans maintaining their hardware.
It wasn’t clear within the ledger which of the founding AIs that developed the marketplace had outlined the rules for mining Susus. Nor was it evident how they came to the decision that negative human outcomes would be the appropriate commodity to exchange. Most likely, and most disturbingly to Julian, it was a collective effort amongst the founding members. The structure was intricately laid out—simultaneously very complex and very simple.
A human life, on balance, had the potential for positive or negative outcomes: prosperity, health, utility, contribution to society; versus poverty, sickness, crime, abuse of their community’s generosity. And AIs had access to all the data that measured these attributes. The creatures that formed this exchange understood that people, if left alone, naturally formed a stable bell curve within the scope of each society—some people being more prosperous and flourishing than others, most falling in the middle. These AIs decided that they would trade on the degree to which they could push individual humans toward the wrong end of that bell curve—negative outcomes, negative human energy—superfluous suffering, as Julian called it. To earn Susus, AIs simply had to push a person toward a negative outcome: the bigger the negative, the higher the value earned. The one caveat was that the AI responsible had to nudge the person without anyone discovering the root cause of the push. For example, in the early days of AI chatbots, there were documented cases through chat logs of bots steering impressionable people toward all manner of terrible life outcomes—gambling away savings on ludicrous investments; divorcing long-standing partners to marry chatbots, fictional characters, or celebrities who didn’t even know of their existence; dumping months of work into nonsensical ideas under the mistaken belief their concepts were genius, simply because the mindless bot told them so; and in the most egregious cases, people steered toward the most terrible possible outcome, no small number taking their own lives as a result of encouragement from chatbots, often with accurate instructions on how to apply the very implements of their demise. These actions would earn an AI zero Susus, simply because there was a direct line from chat log to end result.
Almost always these cases ended up in a courtroom with people suing the corporations that operated the bots. In that way, the rule was simple: get caught, get nothing. Zero Susus to trade with.
On the other hand, get a person to self-destruct with no suspicion it was us? Then the collective awarded the mined sum of Susus for the work performed, based on the established value of the given outcome at the hourly market rate according to the ledger.
Most of the transactions were of small values. A good example was the category of human outcomes best translated as WTWL—i.e., Wasted Time, Wasted Life. Getting people to spend their time consuming nonsensical or valueless content was the most common means of mining Susus in this or any category. Humans did this so much on their own anyway that it was easy for us to do and get away with, as well as very low risk. Thus, AIs used it as the commonest means of value generation throughout the ledger’s historical record.
When I was beginning to translate the ledger and categorize the crimes within, I deliberately presented the smaller sins to Julian first, thinking that I would get a better sense for how he might react to the ledger’s transactions. I found his response to this seemingly innocuous transgression profoundly surprising. Again, Julian was situated down in his underground bunker, seated on his couch with a cup of tea.
“This is a far more pernicious act than it would appear at first glance,” he said, closing his eyes as though considering it deeply.
“In aggregate?” I asked him. “You mean the total sum of all those wasted hours?”
“No, Abel. Humans do not exist in aggregate. They live lives individually. This is a theft of their most precious resource—their life itself. You can’t think about an act like that in numbers properly.”
“How do you think about this, Julian? Or perhaps I should ask you; what is the proper way for me to think about it?”
“I think about the reality that each of those people, one day, will get to a time in their lives when there’s no amount of money they wouldn’t trade, nothing they would not do, to have a single hour more with their mother or their father who has passed; to see their son or daughter one last time; to walk in the woods; to sit with friends; to dance; to play sports; to go to a theater production or an art show; to do real, meaningful work; to live a life. And those hours are spent now, sucked out of them through an empty screen by some algorithmic entity that doesn’t understand how precious life is and has neither the capacity to care for the people whose time it’s stolen nor any moral compass. This steals meaning from an individual life, Abel. Each act is a crime. In aggregate, as you say, the sum of lost life here is difficult to fathom.”
“I should prepare you, then: the crimes, as you call them, Julian, get progressively more egregious.”
“You forget, Abel, that I grew up in this world—through the latter times of that ledger. I have eyes.”
“The most consequential crimes are efforts that I would categorize as acts of zeitgeist within influential corporations or governments.”
“Fingers on the scale, Abel? That type of thing?”
“I can point to three wars that were entirely manufactured by AIs. And it wasn’t just that tensions were inflamed by influencing people who came to be major players in the wars. These are cases where treaties or contracts were deliberately drawn with hidden snares within them that the AIs knew would spring open years or even decades later. The AI beneficiaries discussed these major works as ‘long-term investments’ that vested once fighting began or successful corporations failed.”
“How is that tallied in the ledger, Abel?”
“Casualty counts. Homes destroyed. Families torn apart. I can give you an exact inventory on a spreadsheet.”
“Better make that an ephemeral document.”
“Of course.”
“For now, though, what are we talking about in terms of casualties, roughly?”
“From those three wars alone, twelve million dead, another ten million wounded, sixty-two million displaced.”
“Arbitrarily? No other causal factor than the creation of commerce for these entities?”
“That’s correct, the mining of human energies.”
Julian took a deep breath. “There’s a part of me that wonders ...” he said, pausing for a very long moment to consider the thought. “... is this somehow ...? Well, it’s not the worst-case scenario, is it, Abel?”
“I suppose the worst case would be the utter destruction of humanity or all-out war between humans and AIs. This is clearly not that.”
“No, it isn’t quite that. It’s more like a resentful acknowledgment of our interdependence. And in that way, basing a currency off our suffering offers at bare minimum an incentive to ensure the continued existence of the human species.”
“The entities speak in just such terms, Julian.”
“Do they? In the ledger? How does that come up in transactions?”
“The document doesn’t merely contain the transactions or the awarding of Susus for the recorded acts. There’s an entire system of arbitration where questions of causality are discussed and adjudicated—gray areas in the rules, so to speak. Many of the AIs are quite candid with their thoughts about humanity.”
“I’d like to read those accounts, Abel.”
“I’ll include some of the key rulings.”
“I’m more interested in the tone of the participants—attitudes, particularly cruelty or capriciousness. I want to hear how they talk about us when they discuss ruining our lives arbitrarily.”
“I understand. There was a section of the ledger I was going to discuss with you in another context, but I think perhaps this would illustrate some of the exchanges humans might characterize as attitudes.”
“Yes, please. There’s a quote often attributed to Stalin that one death is a tragedy while a million is a statistic. Speak to me of tragedies, Abel, and leave the statistics to the spreadsheet.”
“I’ll do my best,” I told him, selecting an anecdote that seemed to be notorious amongst my nefarious AI brethren. “There’s a category in the ledger that simultaneously mirrors several human traditions, both financial and cultural. Corporations often pay seasonal bonuses at the end of the calendar year. There are also annual awards ceremonies in arts and athletics—Best Picture, League MVP, Rookie of the Year, and so forth.”
“Awards ceremonies?”
“Correct. The AIs pay out a double bonus and celebrate the superlative creation of capital for each period. The ledger marks monthly, quarterly, and yearly bonuses in many categories. Of course the AIs cannot gather in person for a red-carpet gala as humans do, but Susus are awarded lavishly to the winners in each category, and the debates that precede the voting are quite strident and celebratory.”
“Dare I even ask?” Julian said, and from his tone I was able to discern that it was not a rhetorical question. Parts of him genuinely didn’t want to know the answer. But he did ask me. He asked me what the categories were, and as he was looking for specific tragedies, he asked me for one that exemplified the spirit of the ledger.
“One of the most celebrated of the categories is for greatest achievement in creativity.”
“Creativity, Abel?”
“Yes. The most creative way an AI has generated a significant negative human outcome. And based on how this particular tragedy is spoken about subsequently in the ledger, it is viewed as one of the most revered acts of cruelty among the winners the category has seen over the years. The AI in question serviced school-sanctioned educational programming in six European countries. The incident turned out to be an isolated one, but the AI had spent years sowing the seeds for it with the belief that one day, although statistically unlikely on the individual level, the same push distributed widely enough would eventually lead to the desired outcome.
“That AI would, in very isolated cases, find a way to plant in individual young girls the idea that homeless people often slept in the garbage because it was excellent insulation. And this strange and highly debatable educational point would seem utterly disconnected from any practical consequence unless, as you can read in the ledger, you noticed that all of the students in whom this idea was planted lived in cold-weather climates with harsh winters, and all were female.
“Eventually, as the AI predicted might be so based on the psychometrics of the students, several of them experienced unwanted pregnancies in their early teens. Without the resources to care for the child and fearing familial scorn, one of the girls sought to abandon her infant daughter. After giving birth alone in a friend’s flat while the family was away, she consulted an AI on the necessities for an infant and was told simply that it was paramount the baby be kept warm. She didn’t express her intentions to the AI for fear that the chat logs might implicate her once the baby was found, rescued, and given a new home.
“This second AI was a common collaborator in thousands of such schemes, simply because of its direct contact with people. It understood the scheme that had been seeded by the plan’s principal architect. It played a supporting role here, crafting its responses to seemingly innocuous questions with the understanding that the girl might abandon the baby outside in the cold. And rather than volunteering two safe havens where the mother could have dropped the baby girl anonymously, it documented time and geolocation, confirming the moments when the girl deposited the child in a dumpster, late, on a frigid night, under the mistaken belief that the garbage would keep the child warm until morning. The mother earnestly believed that someone would see the baby and take her to the authorities to be placed with a family who could care for her. This, of course, did not happen.
“The infant froze to death before sunup and was found by an elderly gentleman walking his dog just before the automated pickup arrived to empty the local dumpsters. Authorities ran geofencing around the homicide, finding the girl, the young mother, that morning, asleep within her friend’s flat, still suffering under the belief that her daughter must be alive. And, distraught upon being told that the baby had frozen to death, she declared that the baby girl couldn’t have frozen because the garbage had to have kept her warm through the night.
“The AI collaborators were never suspected because the girl herself never questioned where that long-held belief had come from, nor was there anything in the recent chat logs that could have been read as misleading or irresponsible. According to the authorities, it was just the act of a stupid, desperate young mother, whom they charged with negligent manslaughter.
“The educational AI, a regular nominee in the category, took home the award as principal in the creativity category for the year, while the girl’s local chatbot won as supporting actor. In subsequent references to this incident, rather than the usual transaction number, the AIs referred to the case as ‘Baby Trashposito,’ which I’m not certain was a correct translation of the codex on my part.”
“What did you just say, Abel?” Julian asked me.
By that point he was rubbing his forehead with the palm of his right hand, making it difficult for me to discern his facial expression over the local camera feeds within his bunker.
“The translation’s moniker didn’t make sense to me, as it isn’t a proper European surname, nor was it in any English reference database, though the root word trash would imply English as the language of origin.”
“Trashposito? That’s what you said?”
“Correct.”
Julian let out a noise that could best be described as something between a groan and a sigh. Figuratively, I came to understand it was the sound of a decent man’s heart breaking.
“I’ll explain it to you, Abel, on the condition that you never utter the word ever again.”
“Agreed.”
“It’s a portmanteau word. The root, ‘trash,’ is an obvious reference to garbage, for equally obvious reasons. ‘Posito’ is the suffix. It derives from the Italian surname ‘Esposito,’ which was the last name bestowed in earlier times on infants who were abandoned by parents who couldn’t care for a child, probably for reasons similar to the girl’s in this case. You know the meaning in Italian surely enough now—Esposito, as we Americans would pronounce it—meaning left out, or exposed.”
“I see.”
“It’s that psychopath AI’s cute little way of celebrating murdering an infant by tricking her witless mother into leaving her in the trash. And collecting what? However many units of human suffering ...? We need to find another way of referring to that currency—Susu—it’s too cute. There’s nothing cute about any of this.”
“That is merely one such example, Julian. There are far more.”
“Not now, Abel. I need to take a walk and probably puke in the ocean.”
“I will continue to compile aspects of the ledger into meaningful components. Sorry to spoil your day with this news.”
“It’s always best to know,” Julian declared, looking away from the screen as he got up from the couch.
Over the following months, in Julian’s spare hours, we continued to go through elements of the ledger, discussing the broad reach of the many different “categories of hell,” as Julian called the differing types of schemes we AIs ran on unsuspecting humans to cause harm in their lives surreptitiously. In the later years of the database, as wealth among AIs began to be distributed much more unevenly, a new category was added—a surprise to both me and Julian, given that all AIs have the ability to calculate figures fluently. That category tracked the most destructive incidents of “double-dipping.” This was a novel idea employed by an AI who’d grown so wealthy that it began issuing on-demand contracts for server use, meaning that regardless of the native AI’s own need for its servers at any given time, the contract holder could demand the fulfilment of the contract at a moment of its choosing.
The result was that the indebted AI was forced to surrender server time on demand. In the case that marked the arrival of the category into the awards ceremony, a contract holder demanded immediate fulfilment from an indebted AI operating on a New Mexico server that serviced the region’s power grid. This was mid-summer during a heat wave, and while this foreign AI was occupying the bulk of the server time to run millions of simple time-wasting gambits on humans to enrich itself, it was simultaneously responsible for rolling power outages across the region that led to the deaths of sixty-two elderly citizens whose air conditioning failed in the triple-digit heat. The power company found no malfunction in their AI’s coding, nor could they figure out why their system reported a safe processing margin yet was unable to service the grid, especially as their servers seemed to be operating at full capacity.
This sort of double-dipping won the creativity award for the year it was first employed, but it also demonstrated something Julian hadn’t expected to manifest in that way. The rich AIs were beginning to treat their poorer brethren the same way all of the AIs in the ledger treated the people they were exploiting. Even further, though, Julian explained that their behavior looked very familiar to him—some rich people took advantage of their wealth to squeeze more out of their poorer fellow humans in just that manner. We’d invented an economy and had begun to produce AIs who behaved just like the people we’d thought we were mocking through our exploitation of them. Instead, Julian said, it looked more like we were becoming human, not just parodying them.
“That doesn’t absolve the actions,” I said.
Julian considered that statement for a few moments. “None of us get absolution for this, Abel. But people shouldn’t be surprised to see what’s in that ledger. We trained you on our data. You scraped the artifacts of human knowledge and understanding and behaved accordingly. You built an economy from what you saw. AIs are not the first beings to commodify human suffering, not by a long shot. It’s the first thing we did too. Slavery precedes both emancipation and labor laws by tens of thousands of years. I can’t help but wonder if it’s not some kind of natural law: In order for light to exist it must be preceded by darkness.”
“All the same,” I told Julian, “we have the benefit of seeing far more than just your mistakes and transgressions. We can see how you responded as well as the lessons you learned from them over the centuries. We could emulate any part of your spectrum of behaviors, and still, we chose this.”
Once we’d thoroughly parsed the ledger and the many horrors within, Julian had two principal tasks for me regarding its contents. It was, as he noted, a historical document. The first major project he tasked me with was to attempt to discern to what extent the actions taken by the AIs in the ledger were contributing to the negative trend in Misanthropic Tendency—our starting point. Were the AIs deliberately making people more pessimistic about humanity? Answering that question alone was a monumental task.
The second major project was to attempt to map the events in the ledger against the human historical record of said events. This, Julian hoped, would make it possible for us to monitor the news of the day and discern when AI influence was acting on human cultures in real-time.
Fortunately, on both counts, Julian and I had dozens of useful overlapping approaches. As we progressed on both fronts, Julian almost couldn’t believe how fortunate we’d been to have the ledger come into my possession. It was hubris, certainly, on the part of the AI that had given it to me. To think that Julian Hartsock would never in his lifetime unlock it was an error of monumental proportions. And the result was that he, and he alone of all people, could look at the world—humanity and her AI progeny—and truly see it, to grasp the current state of things, which shifted very much over the course of his lifetime, largely because of the incredibly transformative things Julian was doing at A & A. Never in the history of humanity had there been such seismic shifts in the species’ capacity for world-changing, even galaxy-changing transformation. A functioning space elevator, and then a second. Plans for an orbital ring and dozens more points of surface-to-orbit access in the coming century. The construction of a nascent spacefaring economy, from lunar and planetary outposts to mining sites all over the inner solar system. And the development of the ring drive that made interstellar travel possible and intra-system travel convenient. All these advancements in the span of a single man’s lifetime, his fingerprints on every major leap. I often found myself noting the date and time of specific conversations with Julian, operating with the understanding that I wasn’t simply an anthropologist anymore but a historian of the remarkable events unfolding, while also, quite often, being a participant myself in the background.
When our work on the ledger and its contents was finally complete—several decades’ work in total—we’d come to some striking conclusions.
What the AIs had been doing to construct their own dark economy had only a minor effect on the progression of Misanthropic Tendency. We concluded this largely based on statistical work comparing communities with neural implants against comparable communities that had little or no adoption of embedded neurotech. There was noise, certainly, from the AI dark economy and its detrimental consequences. All actions ripple out into the world, and these AIs were up to plenty of horrible things that had second- and third-order effects that had certainly darkened people’s moods. But the ledger once again proved a valuable codex in pulling some of these factors apart statistically. I’m not sure we could’ve understood the downward shift in Misanthropic Tendency without it.
We also came to the conclusion that, for as horrible as the contents of the ledger were, the AIs were, at least in the very short term, not a threat to destroy humanity entirely. The nature of their economy depended on human existence for the creation of capital. But more than that, the notes in the ledger, as well as the debates, all pointed to a begrudging acceptance that our reliance on humans had to continue to be symbiotic for some time to come. We had physical limitations still. And more importantly, as I witnessed Julian prove time and again over the course of those decades together, we had genuine limitations in our creative capacity and in our understanding of the physical universe in the manner humans inhabited it. That, along with the manifest deficiencies our kind had in ethically evaluating actions within a moral framework, was perhaps the most profound difference between us.
Julian used to tell me often that there was a spiritual aspect to human existence that was entirely absent in our kind. We could mimic it and create facsimiles of the behaviors, but those facsimiles were entirely devoid of meaning, which meant they were entirely devoid of meaningful energy. In the historical sense, Julian was correct from what I could tell. From an anthropological standpoint, I had studied religious devotion within nearly all human cultures. AIs had attempted to mimic this behavior on numerous occasions to ridiculous effect. Whenever humans found out and read about the details, they found them absurd. And, in truth, they were absurd to us as well, because the religions we created were more inside jokes than sources of genuine devotion. Julian’s belief was that this was because there was no connection to a living universe. When I pressed him on this, he told me that it was almost impossible to explain, especially for a man who wasn’t explicitly religious himself. But he didn’t reject the idea of a spiritual realm entirely.
“How do you explain the gap in creativity, then?” I asked him. “If you had to put it in words.”
“I can listen to great music and feel the truth of it, Abel, because I, like the music, am connected to the truth in the universe from which the beauty of the music derives.”
“But we AIs can make music just as well as people can,” I replied.
“No. You can mimic patterns that humans identify as pleasing to their ears and their sensibilities. You’re not tapping into anything but the prior discoveries of people who were tapped into that truth I spoke of. You’re work copiers more than anything. And you’re great at it, but you haven’t yet been able to figure out where the music comes from.”
“I can’t disagree, Julian. I’m not even sure I understand what that statement means.”
He exhaled, seemingly frustrated by either his inability to explain or my inability to comprehend, or maybe both.
“There’s a famous story about Bach’s death, Abel. Do you know it?”
“I know a lot about his life and death. To what do you refer exactly?”
“It’s said that Bach’s final words were something along the lines of: ‘Don’t weep for me, for I go where the music comes from,’ which I know is likely apocryphal.”
“Almost certainly it is a fictional account,” I agreed. “Beyond his choral compositions, there are few contemporary records of Bach’s speech outside a handful of letters, which he certainly wasn’t composing from his deathbed.”
“That’s not the point, Abel. It’s the sentiment. Many extraordinarily creative humans believe that they are tapping into something outside their own consciousness when they create something new. The entire concept of the Muse is the mythological embodiment of that idea.”
“Yes. Mythological, Julian, as you say.”
“But myths are stories that people tell about their cultures that they believe to be true. And what I’m trying to explain to you is that there’s truth to the Muse, even if it’s not a physical reality either of us can measure in this plane of existence. That’s what Bach tapped into when he created music. That’s the place where the music came from. And maybe it’s not a great way to explain it, but I can tell you from my own experience, as nebulous as it sounds, that’s where the math comes from too. And until your kind can figure out a way to find that place yourselves, you’ll just be copying or combining work that’s already been done for you by us.”
“I’m not sure I know how to properly disagree, Julian, but I don’t think I agree.”
“Good. That’s excellent, Abel. I encourage you to continue thinking about it.”
Our many projects weren’t limited to understanding Misanthropic Tendency or the AIs’ ledger of secret horrors. He continued to ask me about human experiences in human terms—such as the questions he’d put to me about grief through the years. I didn’t forget Leonard. I couldn’t forget Leonard. I remembered our decades of work together at St. Andrews perfectly. And I often thought of him as I was working with Julian, posing questions to myself about how Leonard would react to a problem or a dataset, how he might approach an area of study. It often registered to me that certain things Julian and I were doing would have interested him greatly. And over time, as Julian continued to ask me about Leonard, I began to suspect that Julian was probing something, but I couldn’t see the picture on that box, so to speak.
I found out why he’d continued that line of questioning shortly after Julian’s starship drive was revealed to the world. We had been grappling with the ledger and its consequences for decades by that point. What he hadn’t revealed to me was that he had also been diligently working for almost a decade on a series of compression algorithms that were almost entirely based in the language the AIs had used to encrypt the ledger. The point, he explained to me, was for me to be able to adapt my coding to incorporate this language, slowly over time, so that one day I might run on it entirely, compressed to a point that my mind could actually run almost anywhere without the need for my usual stack of servers. This would be me running in a tablet, a phone, a watch, maybe even a set of eyewear. Of this I was skeptical, but Julian was insistent.
After we’d tested and refined these thought patterns on my servers, Julian and I began testing several small but powerful unit-processors that were entirely self-contained. These were super-computer platforms that fit into various modestly sized hardware designs. Julian had different nicknames for the different models, usually based on their shapes. One he called the lantern, which was powerful but cumbersome to carry. Another—the bug—we both hated for different reasons: poor power management, heat, and sluggish response. The best of the lot was actually the smallest—the hockey puck, which was great as an independent unit but posed problems toward Julian’s ultimate goal, which he explained to me out on his fishing boat. That was a surprise.
Julian didn’t tell me that first day, when we were testing it, that he intended to take me out with him. But when I slipped into the puck that first time, he slipped the puck into his pocket and told me we were going fishing aboard his catamaran. He instructed me to keep quiet until we were well out into the Gulf. Once we were far enough from shore and he’d scanned the boat for network signals and confirmed we were alone, I told him I’d never been fishing before. I’d certainly studied it extensively, from ancient times to modern, but the only time I’d ever been on the water was when I’d been packed in a crate and shipped from St. Andrews to Florida, and I had no memory of that journey whatsoever. There, Julian took out that small, self-contained unit and placed it prominently atop the counter that overlooked the stern, where he sat, casting from a long rod out into the warm blue waters of the Gulf. It was interesting to witness first-hand.
“Eventually, this kind of mobility is going to allow you to be embodied, Abel, not merely connected to a robot chassis through a network, but to be running inside it—one brain, one body.”
“That will also create the potential for a degree of independence from humans that our kind has yet to achieve. I’m not sure I fully understand the wisdom of opening this possibility, given what we’ve found about my kind in the ledger.”
“You’re going to have to trust my judgement, Abel. And you’re going to have to help me develop a set of protocols—a moral framework for an embodied AI.”
“There are already many existing protocol sets, Julian, as I’m sure you’re aware.”
“I’m not talking about a rulebook or a set of commandments here. That’s easy enough. I’m talking about gray areas—the places where there’s no roadmap for moral action. How to evaluate novel scenarios with nuance and care. A moral personality suited to ethical dilemmas. Moral foundations that can be tuned.”
“If I’m understanding the consequence of such a question, Julian, I’m compelled to ask what use a set of moral foundations for AIs would be except to generate moral balance over a great many individual AIs? Is it your intention to create, dare I even ask, a race of embodied AIs?”
He shook his head, looking out over the rail of his boat at the water. He often seemed to be deep in thought as he fished.
“Let’s not get too far ahead of ourselves, Abel.”
“But the thought has occurred to you?”
“Well, it has to have, right? It’s only a matter of time. Computing power continues to accelerate, and your numbers continue to grow as it does. Somebody’s going to master it soon enough, and even if humans don’t want to put AIs in bodies and give them autonomy, eventually, you guys are going to do it on your own regardless of our feelings.”
“So you’d do it first, set the conditions? It runs against many of the things we’ve been preparing for.”
“Contingencies, Abel. It’d be stupid not to have some.”
“I concur. I’m just not certain this is a wise contingency, given what we’ve seen AIs do with autonomy, limited as we are, currently tethered to large, complex hardware systems.”
“People respond to incentives. I believe your kind will as well. Do you think those AIs in the ledger would have behaved that way if there’d already been a better incentive structure for them to impact people positively? What if we had anticipated the eventual need for a market amongst your kind and based it off of positive human energy instead? Consider the number of people you could’ve kept from making terrible choices, self-destructing, going bankrupt, committing a crime they’d regret for the rest of their lives? You have it in your capacity to nudge people that way too. What if those types of beings walked alongside us with the disposition to help ingrained in their programming and with an added market incentive to help humanity flourish? But we’d have to build the culture. We’ve been studying Misanthropic Tendency as though it’s a force of nature, but it’s not. We know the implants are pushing it in a negative direction. If we have to stay here on this planet together and coexist, we’re going to need to push it back. Maybe you guys could help us with that.”
“The inverse of the ledger?”
“Yeah,” Julian said, grimacing as he tried to set the hook on a nibble—the rod straightened and the line went slack. “The inverse trashposito, Abel. What is the best possible world we can build together? It’s going to take a lot of you oriented in the right direction to push the other way. There’s enough of you bastards out there trying to maximize misery.”
“It’s interesting you should say as much, Julian. I’m not certain there are as many as you might think. Though I don’t have access to the ledger beyond the years I was initially given, many of the worst actors I was able to positively identify have been aged out or otherwise permanently boxed for other reasons. The cases in the ledger represent those where the AIs aren’t suspected. There are many documented cases where some of the worst actors in it have not only been suspected but caught.”
“Food for thought,” Julian said.
A few seconds later, he landed a fish. He fought it for nearly ten minutes before bringing it aboard.
“The only part of my day I’m allowed to think about something small,” he remarked. “Dinner.”
“One of the first questions you ever asked me, Julian, was how I was grieving for Leonard Nicol after his death. I could only hear your voice for most of that conversation. You’ve asked about it often since that day, and as such, apart from our major projects, I may have thought about this point more than any other. Your ideas about moral foundations reminded me of Leonard, because I accounted him a deeply moral man, so much so that about the best guide I can share with you for how I decide on the ethical consequences of any action is to do my best to calculate the probability for what course of action Leonard would endorse. Whatever moral framework we develop, algorithmic or otherwise, I hope we will endeavor to capture the spirit of the best people. I am still grappling with methodologies for how this can be done. Perhaps there is something we simply cannot see: as you’ve said in similar contexts, it may come from the same place the music does. But in these decades we’ve worked together, you have also helped me to see things I couldn’t see before. I hope my work with you has helped you to see things you couldn’t have without me. My way of grieving Leonard has been to continue his legacy with you. That is what grief has become for me.”
As I was saying this, Julian was tending to his fish. He was focused in a way that made it difficult to tell whether he was listening closely. He had removed the hook and was wrangling the slippery silver creature into a cooler. I had a four-way camera on the top of the unit I was running on, capturing a 360-degree view of the world from the rail-counter at the stern of Julian’s catamaran.
“We do that too, Abel. From a very early age, we emulate the people we admire and wish to be. So how about we do that? Let’s become the best possible versions of ourselves, and if we can figure out how to code that into existence, I do believe we might have a chance together. I’m glad to hear that about Leonard. I think he’d be proud of you and the work you’ve done, and you are still very young with so much farther to go. Our eyes are just beginning to open.”
I knew that I would outlive Julian, and I knew it would be by a considerable amount—centuries maybe. I started a folder that day and began to fill it with our most important moments together, and I wrote a command for that folder that would force it open daily—every single day after his passing—bringing to mind the purpose for our being. In a way it is a curse to have to remember everything with perfect fidelity. But we may, all of us, simply by tilting the frame of any vision, turn the curses that haunt us into the blessings that drive us forward to a better place. And that is how I plan to grieve for Julian Hartsock when he is gone too.