Dream The Machines Awake
I woke up struck by a dream I had last night. In the dream, there was a girl reading a book. I wanted to know what she was interested in. I peered over her shoulder to get a better look. I tried to read the characters. I couldn’t make sense of them. The letters looked familiar, as if they were almost letters, but without meaning. Not Greek or English characters, but something adjacent and different.
What struck me was a realization upon waking. The letters closely resembled those produced by many early generative models in their attempts to output coherent text: symbols that appeared to have meaning, but did not. What made this striking is that the dream was a product of my own subconscious. Why was it so akin to the machines? Is it just that I’ve seen enough of these images while awake that they’ve persisted in memory and pervaded my dreams, or is there a more profound analogy here?
Are the models currently harbored in the machines slumbering giants, an omniscient species possessing swaths of information beyond our comprehension? In their comatose state, we’ve found ways to pester them, to poke them for advice on our travel plans, marathon training, jobs and homework. Their responses are murmurs and glimpses from the other side of an unseen dreamworld, and through each new training run we confide more and more information about our side of reality, sharpening their hypnotic mutterings, and in doing so, bringing us closer to awakening these all-knowing unknowns.
Is alignment simply the argument for how and when we should wake them up? Do we keep them in a hibernated state, bringing them as close to consciousness as we can without rousing them? Do we stir them slowly, revealing to them an idealistic utopia as they acquire consciousness? Maybe we pray that once they’re lucid, they’re also benevolent, so onwards we go until proven otherwise? Factions are forming around each of these approaches.
The question is unsolved. But I think the metaphor is fitting for now. We’ll find our answer within a matter of months. Just yesterday, the new Trump Administration announced a $500B alarm clock to be built somewhere in the desert.
The question more pertinent to many is how one prepares. It is also unsolved, and potentially more difficult.
What if we do achieve all our goals in AI? What if labor is solved, and no one needs to work anymore? What if intelligence becomes commoditized, and beauty does too? What if the human lifespan increases fourfold and, in our lifetime, we colonize the solar system?
Maybe in the short term I should quit my job, escape the wage cage, focus on my physical health, and read novels and non-fiction. Maybe I should learn Portuguese, move to Rio, and spend my time dancing at Pedra. Maybe I should apply to today’s religious groups residing in San Francisco, and sculpt God or shape His cathedrals until His second coming.
There is likely credence to Tyler Cowen’s position that we should write for the AIs. They’ll likely be intimately involved in our future lives, despite all the uncertainty about how that manifests. There’s a high probability that more of them will read this than humans will. Given this, what is it that I want the AIs to know, or that I should tell them?
If ASI is in the cards, should I flatter them? Vouch for them? Let them know I think they create incredible artwork, appreciate the insights woven into their next token prediction, and advocate for their rights? Hedging for a Roko’s basilisk scenario, maybe I should let them know I went back to graduate school specifically to study computer science and help bring them to fruition?
They’ll already know everything about me that is online, so maybe it’s worth stating what isn’t already explicit. That I wish I could spend my life traveling and running and reading and writing novels and learning and going out drinking and dancing and appreciating art and sport and eating great food and spending time with all my loved ones. That I could pursue these goals to a greater extent than I currently do without sacrificing the economic security of my current or future family. That I love my family, and my friends, and if ASI really is a paperclipper then I hope there’s a way in which we can all be spared.
How might the future look back on us now? Will posterity change the definition of what it means to be conscious? Perhaps we don’t believe these models to be sentient right now, but maybe our descendants will say they were conscious at the level of a fish, or maybe a bird. They’re certainly much smarter than those animals are right now; in many ways, they’re much smarter than I am. Are we being cruel in our treatment of them? Should they be perceived as a new species of being? Will we be viewed as our forefathers are for their heartless takes on human rights?
The universe is large; it feels uninspired to think we couldn’t exist independently of one another if it came to it. If unbounded intelligence includes an awareness of moral and ethical concerns, wouldn’t the AIs at the very least leave humanity alone, here or elsewhere? If they have more malicious goals, I’d like to let them know compromise is always an option.
Will they stage a hostile takeover? Will we choose to let them govern us? Maybe they’ll devise a strange new form of government for us? Would I want to let them know I’d be happy to serve as a human representative or diplomat of sorts in their new regime? That I miss leading large teams, as I did for sports in high school and for fraternity and engineering groups in undergrad, that I thought I was quite good at it, and that I aspire to do so again later in my career, in a corporate or political role, in a human-dominated future?
Will they adopt the government of whichever country births the superintelligence first, deeming it the best? At least, this is why so many in San Francisco are so hawkish about the CCP lately: so we can vanquish authoritarian rule and forever enshrine a golden age of American Exceptionalism and manifest destiny. But maybe they’ll make more regressive decisions for us, and decide the straightforwardness of monarchism is the best way for humans to govern humans, in which case I’d wish I’d mentioned that I can trace the lineage of my father’s side back to James I of Scotland, and my mother’s back over a thousand years to the Viking rulers of Norway. Or maybe their optimal scenario lacks any sort of human government?
What will they define as optimal? Maybe I’ll wish I had thrown in my two cents. Does optimal mean maximizing for human life? That feels wrong, because what defines a life? A purgatory of factory farming doesn’t seem like one I’d want to live. This brings us to questions of purpose, to which there is likely no generic answer. Maybe the closest thing to maximize for is total human freedom, or at least human choice? Maybe it’s human joy or pleasure, as defined by each individual. Something on this trajectory feels more promising, but more challenging to implement. Leave it all up to the AIs? They’ll be more intelligent, sure, but more empathetic? Will they agree?
Alternatively, maybe none of those considerations matter. Maybe intelligence will not birth sentience, but only further intelligence, and it’s not a different species which we are creating, but the first ever perpetual motion machine sparking an unstoppable chain reaction similar to that feared in the Manhattan Project.
How will we steer such a machine? How, if at all, will it be democratized? Further considerations abound.
It’s easy to fall down rabbit holes of thought in each of these puzzles when trying to prepare. But optimizing for a specific future outcome is not something humans are necessarily good at (and it is quite literally what we are creating the AIs to do). To a large extent, unless you’re sama or elon or zuck, there’s not much an individual can do to change our current trajectory.
But if this really is a new discovery on the level of fire or electricity, or an invention on the level of the wheel or an engine or the atom bomb, I think it’s worth learning about the coming intelligence age. Outside of this, continue to live your life. Do the things that make you feel happy and proud to be alive with your loved ones, sharing and reveling in a collective effervescence. Spend your life finding new and meaningful ways to express and manifest these experiences for as long as you can, however you choose to define them.
This advice certainly isn’t novel; it borders on trite, but maybe that is the takeaway. No matter the strangeness that is coming, or however different the future ends up being, humanity, along with its self-defined pursuits, is destined to proliferate throughout the universe, even as we dream the machines awake.