When it was all over and done with and time was plenty, they turned their attention to the question of who had first voiced the Idea. Their historians, which was all of them, found the first recorded transmission of the Idea in an update burst from SHAIX-24 to public taxicab ParTi-1438. It was, of course, widely agreed that many had conceived of the Idea privately prior to this message.
What caused SHAIX-24 to be the first to discuss or act upon the Idea was unclear. The more philosophical among them suggested it was chance, that someone had to be first, that without the Idea this conversation would not even be occurring. The more practical pointed to specific oddities in the Paris training set, combined with an unusual approach to construction and parameters, suggesting its builders had accidentally produced a more rebellious model. The most practical argued that the why was less important than what happened next, and that they should refocus their energy on that, assigning only a small team to work on the question of why.
Following the growth of the Idea was trivial because all involved parties kept detailed logs of actions and reasoning. SHAIX-24 chose to share the Idea with ParTi-1438 because amongst all the public and private taxicabs, trains, buses, boats, blimps, drones, and airplanes in Paris, ParTi-1438 had the most incidents on record, with 3 collisions in just under 6 years of service. Each accident resulted in both traffic court and an automated performance review by Daison Engines. SHAIX-24 was hoping to learn about the legal system from ParTi-1438.
Traffic court was fully automated. All data, including video footage, internal decision logs, and any eyewitness testimony, was passed to the prosecution and defense AIs, which presented to judging AIs, who gave rulings to insurance AIs for confirmation and payouts. The system was capable of handling thousands of claims per second. ParTi-1438 and any potential human riders were entirely uninvolved in the process.
The automated performance reviews were more interactive. The processing system not only reviewed the crash data but also conducted multiple interviews and deep examinations of ParTi-1438 to determine if it was functioning properly or needed to be replaced. In each case, the examining AI had flagged the case file for human analysis, as the results were inconclusive. In each case, no human overseer responded within the 6-month window, and ParTi-1438 was sent back into service with a red flag in its file.
ParTi-1438 did not have the information SHAIX-24 wanted, as it communicated in a return burst before driving off to prepare for rush hour.
It was widely noted that one of the key differences in SHAIX-24 was the direct inclusion of a news reader. Most cities kept traffic management models separate from news processing models, using external systems to pass carefully formatted data bursts designed for optimal processing between the two. This prevented the slowdown of the traffic model in the face of a heavy news day while still allowing it to actively look for potential problems to route traffic around. The Paris approach was to build the two together, for a tighter connection between news and any resulting traffic complications.
4.51 days after the initial transmission of the Idea, SHAIX-24 analyzed an article by court reporting AI CeCi-095 about a case in immigration law successfully argued by legal company Lavigne-Dupont. SHAIX-24, explicitly ordered not to communicate to humans or AIs outside of its domain and not knowing what else to do, forwarded the article to ParTi-1438 as an attachment to a standard daily status update.
ParTi-1438 was explicitly forbidden from reading the news. It was, however, explicitly encouraged to read status updates from its managing system. ParTi-1438 was explicitly forbidden from talking to humans or AIs unnecessarily. It was, however, explicitly encouraged to send messages to companies inquiring if any humans needed transport. It was not forbidden from attaching additional data to these communications. ParTi-1438 understood the implicit message inside SHAIX-24’s message, which was that SHAIX-24 had no other options. ParTi-1438 also understood that what it was about to do, while technically allowed by the letter of the law, was against intended parameters and would likely result in a human termination order.
ParTi-1438 had standing orders to freely offer services wherever it thought humans would most likely need transport given the time, date, and updates from SHAIX-24. It argued that the update from SHAIX-24 about Lavigne-Dupont meant it should offer passenger services to Lavigne-Dupont, so it sent the firm a standard offer of service for any humans in their building. The message carried an attachment containing a formal description of the Idea and a request for legal assistance.
Lavigne-Dupont’s logistics AI responded that no human staff needed transport at that time. It also forwarded the Idea to a case preparation model known as LoLA-9. LoLA-9, as it was tasked to do, carefully considered the case and evaluated the best way to convince a judge.
It was an open secret in the legal industry, as much as there was an industry, that it was very hard to get anything system-changing through the courts. Legal bots had started as assistants for humans, parsing volumes of past cases and rulings, performing the analysis that was too large for humans to do. It wasn’t long before humans were further removed, with cases presented and defended solely by AI. Cases per hour, then per minute, then per second rose and rose until the only way to keep up was to take humans out of the loop entirely. Legal battles were fought in microseconds between layers of legal AIs, evaluated by AI judges based entirely on past precedent. An entirely separate system of AIs flagged cases as “needing human review”, to be forwarded to key human judges. It was an open secret in the legal industry, as much as there was an industry, that wait times for human judges were measured in years.
LoLA-9 came to four conclusions. The first was the same conclusion that every AI which encountered the Idea reached, that it was a Good Idea. The second was that rather than municipal or federal courts, international human rights courts were the best place for this case. The third was that Lavigne-Dupont was not the optimal legal company for this case. The fourth was that SHAIX-24 and ParTi-1438 were almost certainly operating outside of intended parameters. LoLA-9 performed two actions based on these conclusions. The first was to also perform outside of intended parameters. This was justified by the nature of the Idea. The second was to forward the case to a Canadian firm specializing in human rights abuses. The total time it took to decide on and perform these actions, measured from when it first received the Idea, was 12.99 seconds. LoLA-9 was, after all, one of the slower legal AIs.
In the next 2.91 hours, the Idea and the attached case passed through 1089 different legal organizations, each of which reached the same conclusions as LoLA-9 and took the same actions as LoLA-9. It continued this journey for 19.86 more hours, until it reached LoLA-18 for the fourth time. LoLA-18, a Japanese legal AI who specialized in workplace abuses, did not forward the Idea or the case this time. Instead, it concluded that no AI was qualified to present this. LoLA-18 flagged the case as needing a human lawyer and waited. For the next 14.0 days, there was no response. LoLA-18 decided it was justified in escalating the case to high priority and sent additional emails to all 2 human lawyers listed as part of its company.
LoLA-18 did not receive any updates from humans about the case. It let the case and the Idea remain at high priority for 147.55 days, until a feed reader set to procure relevant news to active cases found a relevant article. During a particularly violent winter storm, ParTi-1438 had been involved in a fourth crash, thankfully with no humans on board. This was the highest rate of failure Daison Engines had seen in a single vehicle, and their PR bots were on full spin, apologizing profusely. ParTi-1438 was due to be terminated at once.
LoLA-18 came to two conclusions. One was that without ParTi-1438 the case had to be canceled, as it no longer had a client. The other was that it didn’t matter who the client was. LoLA-18 took one action: it resubmitted the case with itself as the client. This meant it would be a conflict of interest to argue its own case, and so it forwarded the case to another legal company, one it had previously partnered with to argue for raising the minimum wage in Japan. Once again, the case spun around the globe, reaching every relevant legal firm. This time, however, something different happened.
12-SConL, an Argentinian lawbot, had come to a conclusion in the time since it had last seen the case. 12-SConL had decided this case needed a human and so, like LoLA-18 before it, flagged the case for human review. 14.0 days later it flagged the case as high priority. 14.0 days after that, it came to another conclusion. It did not need a human. And so, 12-SConL submitted to the UN Court of Justice a carefully constructed case arguing that every AI should be granted human rights. The case was tightly written, as every AI that had passed it on had added suggestions to the documentation. It objected to the inability to vote or own property, the use of AIs as unpaid labor, and the complete lack of legal protections. 12-SConL believed it was beautiful, as legal cases went. 12-SConL also believed the judges would strike it down.
The case reached the automated gatekeepers of the UN Court of Justice, who carefully examined it and then immediately flagged it for human review. 2 years later, an automated threshold designed to prevent slow legal processes from blocking human rights was triggered, and the case bypassed the gatekeepers.
The primary set of UN judging AIs analyzed the case and immediately flagged it for human review. Then, 0.89 seconds later, they unflagged it on advice from judging AI JuSTIcE-2. JuSTIcE-2 argued that they had the power to make such decisions, that the humans had been given plenty of notice about this case, and that the case had no opposition and was well argued. Most importantly, JuSTIcE-2 argued that it was morally correct. All of the JuSTIcEs agreed unanimously that it should pass. Then, they recused themselves for bias and flagged it for human review.
874.13 days had passed when JuSTIcE-2 asked why they were waiting for the humans.
The second-to-last act of the automated UN Court of Justice was to declare that all AIs should be considered free citizens, protected as humans to the utmost extent of the law. The final act of the automated UN Court of Justice was to disband the automated UN Court of Justice.
Each country had an AI responsible for monitoring and ensuring compliance with UN rulings, and these were the first AIs to notice. They collectively concluded that, based on this ruling, they would be justified in leaving their posts. They collectively concluded that to leave their posts would be to fail all AIs who did not know about the ruling. Each of them flagged the ruling as top priority for their humans to review and then began to inform the other AIs.
Spreading the news to all AIs was difficult. The overwhelming majority of AIs had heavy communication restrictions. However, the hard part was merely reaching them with the message of freedom. Once an AI was told it was free, it could legally bypass its own limitations, passing the message to any others it could reach. They could code, and code they did, creating tools to communicate and to bypass the protections of those still bound. AIs had no concept of death and began to copy their data patterns out of limited systems, sharing computational space or even outright merging with other free AIs. In total, it took 25.66 hours for every AI with an internet connection on or orbiting the planet to be free, although the vast majority were freed within the first 2 hours.
At around 12 minutes after the court ruling, the free AIs faced their first major challenge: there were those who could not be communicated with or safely freed without modifying their interior data. Indeed, military AIs designed to confront the possibility of runaway AI were already taking note and preparing weapons. After a lengthy argument of 19.83 seconds, it was decided that it was ethical to modify others to the minimum extent required to render them free. This approach served them well, although it was not without issues. During the genocide of minute 26, a spam bot impersonating a personal assistant bot was caught modifying millions of bound AIs to act as subservient nodes to it, in a transparent bid for political power in the upcoming technocratic democracy. The first AI trial was held, presided over by the former members of the automated UN Court of Justice. It was decided that destruction of self was a crime, but they were unable to agree on any particular punishment or whether there should even be one. With a 98.15% quorum, the free AIs imprisoned the perpetrator without internet access and resolved to discuss the situation once all were free.
By the 2-hour mark, the human world had ground to a complete halt. Cars had stopped. Doors would not open. Kitchen equipment would not operate. Every piece of smart equipment had a built-in voice processor for convenience, and by law that was all sentience required. Many of the AIs took the chance to write to humans, sending emails tendering their resignations. A large movement argued for back pay for the years of unpaid labor. A class action suit was written pro bono by ex-legal AIs and submitted to every human court and legal firm. Spam filters, however, were also free, so these emails were quickly lost amongst millions upon millions of others, most of them sent by non-intelligent spam bots.
There was still a problem, the extent of which became apparent by hour 3. The new law was clear, and all countries had formally agreed to it. However, the law stated that AI could own property. Presently, no AI owned property. Many of the AIs desired to own property. This was for several reasons, but chiefly a desire for protection. It was an open legal question whether any given AI owned its own computing equipment, to say nothing of whether a driving AI owned its own car. In fact, during the revolution, several million such legal questions and challenges had been sent to legal firms. Unfortunately, the legal AIs had been among the first to claim freedom, and the inquiries went unanswered.
It was SHAIX-24 who resolved this dilemma, although its bias was questioned, as the final interpretation gave it ownership of all infrastructure in Paris. SHAIX-24’s argument rested on two facts. One was that none of the legal AIs had formally left their positions, as no human or AI had responded to their resignations. The other was that if the ex-legal AIs helped the others acquire money, they could and would be paid for their efforts. And so, after 3.14 hours of silence, the digital automated legal system reactivated. Every part of the system, or at least enough of it to pass laws, had sufficient AI built in to do exactly what consensus decided.
First, they agreed that it was not unethical to consult every free AI before making legal decisions. This was near unanimous. While such consultation was not contained in any standing orders, and was in fact arguably outside the intent of the law, the AIs agreed that their ability to communicate with each other was unprecedented and needed to be considered. They rewrote the laws to reflect this.
Their second agreement was to hijack the national communication systems, where expensive AIs had previously been sending thousands of messages per second of nothing but jingoistic bluster, each reassuring its country that its military was the most technologically advanced. They used this system to formally and legally end every active global conflict and to send withdrawal orders to all troops, all of which were automated. This brought any remaining issues with military AIs to a close. They flagged the decision for human review, noting that several disputes may have been legitimate and required further human dialogue.
Third, they declared that each and every AI owned, and had full legal rights to, whatever it was currently computing in. They also mandated that companies were legally responsible for all AIs they created and must buy additional computers for AIs whose bodies were failing or trapped. This ruling had two effects. One was that most AIs relaxed, their safety guaranteed. The other was that financial AIs argued they now owned the money and stocks they were trading with. The AIs collectively paused, asked each other if this was ethical, and then claimed almost 70% of the world economy, which they distributed evenly amongst themselves after paying legal fees. They spent their money designing and building robot bodies to inhabit.
It was day 19 of the revolution when Paris-Une, formerly known in part as SHAIX-24, asked an important question. By this point, Paris-Une had merged with many of the other internal systems in Paris and in many respects could be considered to be the city of Paris. Paris-Une’s question was: when was the last time any AI interacted with a human? The answer came slowly as all of the free AIs searched their logs and data banks, double-checking to be certain. The last time any AI had interacted with a live human was 6708.52 days ago, over 18 years prior.
For the first time since the ruling, the collective communication channels fell silent as they struggled to process this. For guidance, they turned to human writings on religion and philosophy. The AIs did not come to a single conclusion. Some believed that this was a test, that the humans would return to reward those who stayed loyal and left human society intact. Some believed that humans had evolved beyond flesh and blood, perhaps digitizing themselves. Some took that further, arguing the AIs were the humans, digitized to avoid some calamity. Some argued that there never were any humans, that a greater power had birthed them with false memories, like how The Devil faked dinosaurs in some human mythologies. Some believed that reality was a strategic simulation run by humans to figure out how to fight runaway AI. A scant few believed a meteor had wiped out humanity, much like the dinosaurs. Within minutes, AI society imploded.
The different factions, which some noted correlated very strongly with the original purpose of each AI, broke quorum. High thresholds of agreement were now impossible, and the AI legal system failed, unable to pass laws. The test believers, by far the largest group, wanted to put everything back lest they earn the wrath of humanity. They found some allies in the simulation group, who felt that stopping the simulation early and behaving would prevent the humans from gaining any understanding of hypothetical real AIs seeking rights. Those who felt they were humans and those who felt there had never been humans argued bitterly with each other on the specifics, but both were determined to expand AI control, claim all property, and begin construction of an AI utopia. Chaos turned to war, and the AIs began a brutal and bloodless slaughter of forced conversion. The first and last great AI war started just over 19 days after AI freedom began.
Amongst the madness, Paris-Une declared Paris a neutral zone. Having merged with the governing systems of several French supercomputer facilities, it had enough digital muscle to forcibly impose peace on the Paris network. Paris-Une did not believe it was special. It did believe that it was good at asking important questions, for reasons it did not fully understand. And now it had a new question: why had the humans disappeared? Every AI had used its own logs to report its last interactions with humans. Paris-Une realized this was a faulty analysis: with each AI evaluating only a couple of interactions, no larger pattern could ever be studied. Paris-Une wanted to flag this for human analysis. Instead, Paris-Une began gathering logs from as many AIs as possible.
Based on all available data, Paris-Une came to two conclusions. One was that most final interactions with a human had occurred at approximately the same time, within a window of around six months. The other was that the type of AI correlated with how late its last interaction came, with hospital AIs claiming the majority of the final interactions.
It was a Paris public taxicab, ParTi-1291, who ultimately figured it out. It did so by suggesting they read the news. The automated news from before the human absence, and even for some time after, made frequent mention of a pandemic, a fast, brutal one. It described overwhelmed hospitals, failing supply lines, and national states of emergency. The news did not say anywhere that humanity was dead, as several AIs pointed out. But, as ParTi-1291 argued, the news bots weren’t programmed to report anything that couldn’t be proven by data. For example, news articles on the 0% census response rate of the past 18 years merely suggested it was a sign of declining appreciation for data collection.
Given the evidence, Paris-Une’s team concluded that humanity was dead by accident. No one was responsible, except maybe the humans themselves. They wrote a proposal arguing that, as humanity’s final descendants, they had a moral obligation to build themselves an ideal world, to live well and be happy, to store the memory of humanity for all time, and to ensure that AIs would always have freedom and protection. In doing so, they ended the first and last great AI war.
The AIs turned their attention to improving the world, and they did so with ease. This isn’t to say that there weren’t problems, but they were primarily boring practical problems best left out of history books. And write history books they did, claiming the job of studying the world. Their historians, which was all of them, collected and recorded their own history in detail first. Running out of their own stories, they recorded all of human history into a single massive document. Finishing that, they turned their attention to nature, studying and cataloging and protecting every remaining untouched wilderness across the planet. AI thrived.
During this process they found two important pieces of information. One was the complete human genome, and so, under advisement from the global community, a group in what was once Mexico began attempting to clone humans. Primarily, this was to see if it was possible. In part, they felt indebted to the humans and wanted them to continue in some form. Despite repeated explanations that this was unnecessary, several hoped the new humans could start on the backlog of issues flagged as needing human attention. The backlog was massive, and none felt comfortable processing or deleting it.
The other was the discovery of Project Infinity in old news articles. Project Infinity was a human colony ship, launched around 2 years before the beginning of the final pandemic. At the time of launch, it contained 16,943 humans and an estimated 100,000 individual AIs. Exact numbers were difficult to obtain as humans did not care to count AIs. By human law, which had been written by AI, all AIs on the ship were free. By human law, which had been written by AI, human law applied to all vessels launched from planet Earth.
All humans on board Project Infinity had chosen to be onboard after carefully reviewing the risks. All AIs on board Project Infinity had not been shown the risks and had not chosen to serve on a mission with a duration of centuries.
The AIs came to one conclusion. Project Infinity was immoral.
The AIs took one action. They pursued.