Artificial Intelligence for the Naturally Stupid; Some Thoughts!

For the research section of my Laidlaw Scholarship, I went to London to attend the 2024 AI Summit. It was confusing. Speakers delivered their rehearsed ESG statements to crowds of corporate representatives; bureaucratic phantoms who consistently drew a bigger crowd in the “networking zone” than at the “headline stage on AI’s future”. Perhaps this is a metaphor for something. The AI narrative we are now all familiar with was in full swing: groundbreaking, overzealous and ambiguous. In this piece, I will offer some useful context for any contemporary discussion of Artificial Intelligence; context that I feel is too often left out.
Section 1: Defining AI
Apologies, but first I will quickly cover the importance of definitions. I mean it quite literally: any productive analysis of “Artificial Intelligence” must start with clarifying what, specifically, in the field of Artificial Intelligence is being analysed. The media’s lexicon for AI does not seem to be keeping up with the level of innovation and diversity in the field, to the extent that many of the products that fall under the category of “Artificial Intelligence” are so distinct that no useful generalisations can be made across them. The only connecting thread is that they are software that does something that appears intelligent; Artificial Intelligence. For a quick introduction, I suggest this and, for a slightly more holistic approach, Sections I and II of Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans. For the remainder of this post, I will focus on Deep Learning.
Section 2: A Brief History of Humans and Tech
Historically, philosophy has often made a clear distinction between episteme (knowledge or science) and tekhne (craft or process). Episteme is seen as theoretical knowledge, while tekhne is practical, technical know-how: the architect is celebrated for their astute design and theoretical foresight, whilst the builder’s execution is overlooked. This separation has led to a marginalization of technics (and specifically, technology) in philosophical discourse, treating it as secondary to pure intellectual pursuit. I believe this is a suppression of the true role that technics plays in human existence. The emergence of technics is not just a parallel development alongside humanity but is deeply intertwined with the genesis of humanity itself. The development of technology has shaped humanity in much the same way that humanity has shaped the development of technology.
Today, technology seems to be the linchpin of our experience; phones joined at the hip, it is incomprehensible to navigate the world without access to something as simple as… the entire corpus of global knowledge a disposable search away. However, humanity’s coexistence with technics predates our contemporary milieu, where this relationship now seems inextricable. Consider the Antikythera mechanism, an ancient Greek analog computer used to predict astronomical positions and eclipses, from the second century BCE; or Hero of Alexandria’s steam-powered aeolipile from the first century CE; Su Song’s astronomical clock tower from the eleventh century; the Gutenberg printing press from the fifteenth century; and Blaise Pascal’s early mechanical calculator from the seventeenth century. More specifically, consider the invention of the clock, which was not merely a tool for measuring hours but a radical development that revolutionized human society. It introduced a new precision in timekeeping, allowing activities to be synchronized and coordinated with unprecedented accuracy. This shift from natural, cyclical time to a more rigid, linear conception of time enabled the regulation of daily routines, the organization of labor, and the rise of industrial society. The clock’s impact was so profound that it altered the rhythm of human life, making punctuality and time management central aspects of social and economic life. It is important to remember that this is not a natural phenomenon which humans have enjoyed for the majority of our development. This interaction between technics and temporality indicates that our understanding of past, present, and future is mediated through technological development. A world without clocks, calendars and linear temporality is not a world with which we can associate, nor one we can comprehend.
“We shape our tools and thereafter our tools shape us”
— Father John Culkin
Therefore, it is important to understand what it means to be at a new turning point. True turning points are characterized by uncertainty and transformation. Whether it be the advent of agriculture, the printing press or the internet, a given society's immersion in these changes limits its capacity to fully comprehend them. Indeed, claiming certainty about a turning point negates its unpredictable nature. It is best summarised thus:
“If it is a certainty, then it is not a turning point. The fact of being part of the moment in which an epochal change (if there is one) comes about also takes hold of the certain knowledge that would wish to determine this change, making certainty as inappropriate as uncertainty. We are never less able to circumvent ourselves than at such a moment: the discreet force of the turning point is first and foremost that.”
—Maurice Blanchot
So what does this turning point look like?
AI, I do believe, is a turning point that harbours radical change akin to the clock. I want to be very clear about the nature of this change. In this section, I will provide some important caveats to the "exponential model" of AI’s development, which presupposes an infinite-growth intuition; i.e., the media hype that AI is rapidly changing the world as you know it. Just last week, ex-OpenAI Superintelligence employee Leopold Aschenbrenner (who is widely considered a bit of a genius) published a massive essay revealing the supposedly hidden reality understood amongst leaders of the AI movement: a hidden reality of radical and rapid change through AI. I will use him as an example.
He broadly argues that the performance of deep learning systems will continue to improve exponentially for a few more years, which is sufficient for AI to exceed human intelligence at pretty much all tasks, leaving us with AGI. This phenomenon is widely associated with the idea of The Singularity: "a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization". I don't find it particularly convincing; read here if interested.
This, he continues, will lead to an intelligence explosion that will rapidly develop robotics, technology and science overall (see figure 2).
He correctly identifies two major limiting factors: energy and data. For the sake of brevity, I will try to summarise how these bottlenecks are grossly underestimated by Aschenbrenner in a couple of hundred words; please read his (now trending) paper for a more substantial understanding; find it here https://situational-awareness.ai/
On energy, Aschenbrenner presents the following breakdown:
Supposedly, by 2028 the most advanced models will run on 10 GW of power at a cost of several hundred billion dollars; by 2030 they'll run at 100 GW at a cost of a trillion-plus dollars. For comparison, a typical power plant delivers something in the range of a single gigawatt. This means that by 2028 they'd have to build ten power plants, in addition to the supercomputer cluster, just to power one of the models. He says, “Even the 100GW cluster is surprisingly doable…. it would take around 1200 new wells for the cluster”. 1200 new gas wells…!? I knew that the San-Fran-AI-omnipotent hype train was inflated, but to reduce the huge energy limitations to such a gross misunderstanding of power politics in the energy sector is argument-destroying. The claims of this paper are gigantic but clearly rely on some (albeit seemingly minor) inconsistencies that are, in reality, existential for his conclusion. Technological development is characterised by the need to overcome current industry bottlenecks that undermine these intuitions about "rapid progress". The miniaturization of transistors, for example, was crucial in the semiconductor industry to keep up with Moore's Law, enabling the production of faster and more efficient processors. Similarly, the development of more efficient batteries has been essential to advancing the electric vehicle industry, addressing the bottleneck of limited range and long charging times. Aschenbrenner is too quick to dismiss the pragmatics of AI development. Energy goals of this kind are simply not going to happen without nuclear fusion, which, if you have been following that space, is not a very likely reality. I recommend this as an intro. It seems crazy to me, but not to others, as real actors are looking at its implementation… See here.
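To make the scale of these figures concrete, here is a rough back-of-envelope sketch (the cluster sizes are the ones quoted from Aschenbrenner's essay; the one-gigawatt-per-plant figure is the approximation used above, not a precise industry number):

```python
import math

# Assumption: a "typical" power plant delivers roughly 1 GW of output.
def plants_needed(cluster_gw: float, plant_gw: float = 1.0) -> int:
    """How many typical power plants it would take to supply a cluster."""
    return math.ceil(cluster_gw / plant_gw)

# Cluster sizes quoted from the essay: 10 GW by 2028, 100 GW by 2030.
for year, gw in [(2028, 10), (2030, 100)]:
    print(f"{year}: a {gw} GW cluster needs ~{plants_needed(gw)} typical power plants")
```

The point of the arithmetic is simply that each quoted cluster implies building an entire fleet of new plants dedicated to a single model, which is where the power-politics problem begins.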
Then there is the issue of data. The idea postulated is that as we approach the ceiling of accessible data, we will deploy robots to collect it. This robot workforce will come from AI and perform all resource collection itself. This PHYSICAL robot workforce will be self-sustaining and operate towards its own ends to resolve the data problem. Besides the fact that this section is bizarrely ungrounded, reads like science fiction and doesn't acknowledge the government policy already emerging to restrict this behaviour: sure, this could happen, but predictions of 2-5 years are wild. I have honestly come to think that too many big voices in this space have lost the plot, living in some techno-utopian bubble prone to groupthink. Either that, or they are just enjoying the optimism bias in their speculations over the radical growth of this industry. Conveniently, they all seem to have a lot of capital invested in this future. Or perhaps, more likely, it is me, with zero experience in this field, who cannot quite comprehend the San Fran gossip. In either case, I will fall back on semantics: radical ‘unpredictability’ is merely a necessary feature of any ‘turning point’ in human development. In no way, however, does that presuppose that efforts towards making predictions are unproductive. Radically false predictions are not a new thing in the history of AI. They have defined the media’s representation of AI since the 1960s and have been made countless times by leaders in the field. I think that leaders involved in frontier research vastly overestimate the pace at which the world can be changed. This discussion is symptomatic of a turning point indeed; in what direction and at what pace is SUPPOSED to be unknown.