It’s the intelligence, stupid

Alexandre Franco - Growth_Nerd
20 min read · May 5, 2023


Short is not always quicker

I realise that I may have succumbed to the fallacy of false economy when I decided to write shorter blogs to save time.

This week, at least, I’m finding that it can take more time and effort to collect, organise and summarise my ideas in a shorter format and still convey the full message I want to share with you. I’ve been absorbing a lot of information about artificial intelligence and, in particular, Large Language Models (LLMs). My fascination with this area isn’t new, but it was reignited by the emergence of ChatGPT. I knew I wanted to write about this topic because there are numerous voices catastrophizing our future as a result of the alignment problem. It’s a problem we can hypothesise about now that we have an inkling of where this technology is heading. I have my own point of view on this issue, and so I want to write about it.

Mortal combat

When I write my blogs, I mainly use three phases, sometimes a fourth, as in this case. My first phase is always to go for a walk, 1 to 2 hours, to sort out my thoughts: what do I want to say, what’s my position, and what are my arguments? Then I try to dismantle that position and its arguments in any way I can, be it a fatal blow or a thousand paper cuts. “All is fair in love and war”. If I succeed in completely destroying my original position, I’m happy, because I’ve now strengthened my ability to defend my position, even if it’s a different position.

Cathartic blow

This isn’t an easy task to accomplish, as we’re naturally inclined to justify our beliefs with biased heuristics or possible positive outcomes that we have achieved by holding said position. It can also be complicated to follow through with changing your position because most people know about your previous belief. You probably talked about it, showed it through your actions, wrote about it, fought for it. Now you may think that by changing your standpoint you’re admitting that you were wrong before, which may cause you to lose face. However, in reality you’re in a much better position, because you’re showing that you can learn and adapt to new information, and at the same time you have a stronger argument about the issue at hand.

Sprints

In the second step, I write a rough draft in which I jot down all my most important thoughts. This is quite simple and I try to create an outline that the blog should follow. Once I start writing the blog, the outline inevitably changes. This is due to my uncontrollable need to write what I think when I write and to let myself drift in other directions than the ones I had outlined.

So the third step is to write the blog, review it and edit it. And that’s it. I publish it for our entertainment and consumption.

The infrequent fourth step, when it does come up, is actually the second step and moves the remaining ones a step down. This step is further research. It happens when I’m unable either to make a strong, convincing argument to defend my position, or to conjecture strong enough challenges to it. When this happens, it clearly denotes that I lack the knowledge to convey my message effectively.

It does happen that I’ve my own insights on a particular topic, but when I try to make my point convincingly, I quickly realise that I don’t have a very strong basis for it. Don’t ask me where I get my insights from. Oh, and I’ve found a couple of books I really need to read now, so they’ve skipped ahead of all the books I had planned to read this year. Maybe I’ll recommend them after reading them.

You’re in for a treat

So what’s this week’s blog about? I plan to argue that humans are intelligent and AI is stupid. That’s no easy task, I can tell you. Not only are humans on average pretty subpar when it comes to intelligence, but when we judge AI objectively, it’s perceived as intelligent rather than stupid. I’ll also attempt to discredit the AI alignment problem, which minds far more intelligent than mine worry about. Or rather, not the alignment problem itself, but its possible outcomes and their probabilities. Challenge accepted, but before that…

Flashbacks

If you read my last blog and think I’m some kind of psychopath, I’m not. For example, if my daughter were to die suddenly (and I know this does sound like psychopath talk), I’d certainly consciously choose to wallow in the past and mourn her passing. I’m also not sure I’d be able not to do it if I wanted to. And I’m pretty sure I’d do it at times in the future, while I’m still alive. I’d do it with full awareness of what I was doing. I’d be extremely sad and suffer anytime I chose to live in the past, milking memories that would by then be just as real as dreams. But suffering is a prerequisite for happiness, because without it you’d have something other than happiness, something more mundane and quite possibly constant. Without death and suffering, life cannot be sweet.

Grow up

If you asked yourself, as I did, what he means by a great man: well, I’m coming to the realisation that what I meant was simply to be a man. It pains me to say it, but I haven’t been a man until now, and my goal is to change my behaviour, because that’s the only thing keeping me from being a man. Until now I was a man-child. I wasn’t able to provide direction, stability, security, boundaries. This is my goal, to grow into a man; it’s about time. I know I’ve a big challenge ahead of me because of my lack of self-confidence, but I love a challenge.

The Agenda

I intend to defend my position that AI is quite stupid and that humans (even though a large part of humanity is, quite honestly, stupid and drags the average down significantly) are objectively less stupid than AI. Not just on average: even the stupidest person is less stupid than AI. To that end, I’d like to define a couple of terms, stupidity and intelligence, so that we’re on the same page when reading the rest of the blog. I’ll also touch on topics such as consciousness (although I won’t attempt to define the ineffable, and I don’t pretend to be the first human to really understand it), the alignment problem, reality and some mathematical methods, because without them it wouldn’t be possible to write this blog.

I’ve no intention of becoming metaphysical and I’ve no credentials that would make me an authority in these areas. However, I do believe that I’ve the ability to assimilate and comprehend information — what you might call intelligence — and to explain complex concepts in simple terms that are easy for readers to digest.

Stupidity

Before I present you with my definition of stupidity, which isn’t the same as the generally accepted definition, so probably not yours either, I need to explain a concept that we should keep in mind for the rest of this blog. Humans and AI are very different “things”. We’re biological machines; AI are algorithms. For this reason, we cannot use the same definition for both, so we need to assess the stupidity of AI and of humans differently. Even though they do similar things when it comes to intelligence, they do them differently. Let me explain with this analogy.

Nature’s intelligence

Take a dragonfly, one of the most impressive flying creatures in the animal kingdom when it comes to its flying skills. They’re capable of complex flight manoeuvres, such as changing direction with incredible speed and precision, hovering in place, chasing small flying insects that are themselves eximious flyers, flying backwards and sideways. If they aren’t the most impressive flying animal on earth, they’re definitely in the top 3. I’m writing this as I look outside and see the swallows flying around, knowing full well that they’re catching flies.

Now think of the state-of-the-art drones we can easily purchase at specialised shops. They’re completely different from dragonflies. You could easily make a list of 1000 differences between the two, and then someone else comes along and can easily come up with 1000 more. It sounds like they have nothing in common. Well, apart from their flying abilities, that would be true. If you’ve seen a real drone in action, flown by someone competent to do so, you’ll know they’re impressive flying machines. They can fly backwards and sideways, stop and hover in mid-air and change direction very quickly. It all sounds quite familiar all of a sudden. Until you try to define how they perform these very similar actions. It turns out that they do it quite differently. Again, you could list thousands of differences, apart from the obvious ones like wings, eating insects and pooping.

Deceptively Similar

Ok, so if we want to contrast and compare the inherent characteristics of their common flying ability, we need to use different definitions for each and see how they compare to their respective definitions.

For example, we can’t really compare the method by which they can hover in place because they use different methods to do so. One uses its wings in an incredible and unique way in nature, the other uses rotors that spin at such a speed that they create lift and keep it aloft.

Both are amazing developments due to either nature or human ingenuity, but need to be compared using different definitions, for example, the mechanism of flight. I can’t judge the drone by the speed and size of its wings relative to its overall size. This is the best I can do to prepare you for what is about to come.

If you have any interest in wildlife and enjoy watching documentaries of the National Geographic sort, I highly recommend one on the natural life cycle of dragonflies.

Thesaurus

Depending on where you look, you’ll get a slightly different definition for stupidity. But I think you’ll agree that the most common definition is lack of intelligence, where intelligence is understood as the ability to learn or to understand the information received and to apply it in new situations. My definition is different: I define stupidity as a lack of curiosity about reality, the immediate, impactful and ever-present reality. If something is a constant in your life, it impacts your life, and it’s immediate in the sense that it’s your present experience of the world as perceived through your senses and awareness, and you’re not curious about it, you’re quite stupid. Going forward, I’ll use stupid under my definition.

Semantics

First, I’ll tell you why I disagree with the “agreed” definition. If stupidity is a lack of intelligence, and intelligence is the ability to learn and understand new information, then that means a learning disability is stupidity. If you find a flaw in this logic, please let me know, because in my opinion it’s not just wrong to think that, it’s disrespectful. Now you might be tempted to say that stupidity and lack of curiosity are two different things: I can be highly intelligent and still have no curiosity about a particular subject. If that topic concerns your own immediate reality, which isn’t only omnipresent but also impacts your life, then I disagree: you can’t possibly be highly intelligent, you just think you are. You can still be intelligent, but you’re also stupid, according to my definition, and in my opinion.

If I use the agreed definition for stupidity, then I can’t say that the majority of people alive today are stupid. Mostly ignorant yes, but not stupid. If I use my definition of stupidity, then a large proportion of people are stupid and by proxy, ignorant. So by my definition it’s possible to be stupid and intelligent at the same time, it’s just that you’re not using your intelligence. Just because you don’t use it doesn’t mean you don’t have it, it just means that what you’re doing isn’t intelligent.

AI — Smart or Brainless?

Okay, now for the AI. For AI, there’s no immediate, impactful, ever-present reality, so we can’t measure it by its curiosity about a reality that doesn’t exist for it. This doesn’t mean that the AI isn’t curious; on the contrary, it’s very curious, it’s programmed to be curious, about anything and everything. So why is the AI stupid? In my opinion, it’s quite simple: it knows nothing, it understands nothing. There are different areas where AI technologies are used and being developed, and sometimes we confuse machine learning with AI, when machine learning is just one of the many methods AI uses to learn. Okay, that’s an incongruence, I hear you say: it knows nothing, but it learns?

Smart

Before I address that, let’s take a step back and clarify what I mean when I say AI.

I’m referring to the Large Language Models that power the apps which you’re probably tired of hearing about lately, but may not really understand what the fuss is all about. Apps like ChatGPT. These LLMs allow you to interact with the algorithm as if you were talking to another person and asking it questions, and you’ll marvel at how much knowledge it can spit out at incredible speed. Sometimes it feels like it’s giving an answer before it’s humanly possible, and it absolutely is. To anyone who tries it, it seems pretty intelligent. That’s because it is. If we were to measure AI intelligence with the same stick as human intelligence, we wouldn’t be able to say it’s intelligent, or not as intelligent, but if we use an appropriate framework for the subject in question, we’ll easily agree that it’s intelligent — at least I think we will.

When you read AI in this blog, it’s referring to Large Language Models.

But I still haven’t answered why it’s stupid. These models use a number of methods to produce an output that makes them look like they know what they’re saying. They haven’t a clue. They use probabilistic and stochastic methods to determine what to generate.

What does that mean? It means that after you feed the algorithm more data than the entire human race can consume in several lifetimes, the machine can easily calculate the probabilities of what comes next after a word or phrase, after analysing almost all the data that exists in the world and finding patterns in it.
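To make that concrete, here is a toy sketch in Python of the "calculate the probabilities of what comes next" idea. The tiny corpus and the `next_word_probs` helper are my own invented examples; real LLMs use neural networks trained on vast datasets rather than simple counts, but the underlying principle, finding patterns and turning them into next-word probabilities, is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "almost all the data that exists in the world".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a bigram model, the simplest possible case.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    """Probability of each word ever seen immediately after `word`."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

After "the", the model has seen "cat" twice and "mat" and "fish" once each, so "cat" gets probability 0.5. Scale the corpus up by billions and the patterns get eerily good, but the machine still only has numbers, not understanding.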

Clueless maths scholar

It also uses the stochastic method, which is basically about making guesses about a possible answer. Imagine I’ve a jar full of marbles. If I ask you how many marbles are in the jar, you can only guess, and you’ll make some random guesses because that’s the only thing you can do if I don’t let you count the marbles. You have a rough idea by the size of the jar and the marbles, but you can never give an exact answer.

This is a way of solving problems or making decisions when you don’t have all the information you need or when things are uncertain. Instead of trying to find the exact answer, you make lots of random guesses and then use them to get a pretty good idea of what the answer might be. So the algorithm is pretty effective at finding the most probable answer based on the data you input, but that’s about it: it’s pretty good at putting words together in a coherent manner that is understood and relays the information in its library or memory bank. It’s just words.

All this to give you an insight into how these machines work and to tell you that knowledge, in my opinion, is only possible if there’s consciousness, because knowledge means being aware of and understanding the information or experience that makes up the knowledge. Without consciousness, AI is stupid.

Testing the waters

But it’s no doubt quite intelligent. So what definition do I use to assert that? I reduce intelligence to the qualities that can be computationally achieved: sub-fields of intelligence such as linguistic, logical-mathematical and even spatial intelligence. Recent advances in AI sound and vision have increased AI’s linguistic intelligence by adding phonology and phonetics to its bag of tricks, and improved its ability to understand and process visual and spatial information. While in the former you can easily argue AI is more intelligent than humans, in the latter it’s not, and it might never be, but never say never. If you analyse and evaluate its performance in these sub-fields of intelligence, you can only accept that it’s intelligent.

So why do I remove all the other subfields of intelligence, since they aren’t inherent in AI, but don’t remove consciousness from the assessment of stupidity? Surely the reasoning is the same: we know it’s not a conscious being, so we shouldn’t take that as a prerequisite. My reasoning is that it’s possible to be intelligent with only some subfields of intelligence, but it’s impossible not to be stupid without consciousness.

The rebuttal rebuttal

You might want to argue that the same can be said about intelligence, since knowledge is a prerequisite for intelligence. I can’t disagree with that when it comes to human intelligence, but when it comes to AI, I already know it’s stupid. So instead of punishing it again, I just want to judge it purely on the methods of intelligence, such as logical reasoning, where algorithms use deductive reasoning to arrive at conclusions. Probabilistic reasoning, to make decisions based on incomplete information. And the use of machine learning techniques to recognise patterns and make predictions. These methods are all made possible through computational and mathematical means, but the end result looks like a duck, swims like a duck and quacks like a duck.
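As a toy sketch of that kind of deductive reasoning, here is a minimal forward-chaining rule engine in Python: given facts and if-then rules, it keeps deriving new facts until nothing changes. The facts and rules (including the `can_fly` rule) are hypothetical examples of mine, not how any production system actually works.

```python
# Start from observed facts and deduce everything the rules allow.
facts = {"looks_like_duck", "swims_like_duck", "quacks_like_duck"}
rules = [
    # (premises, conclusion): if all premises hold, conclude the conclusion.
    ({"looks_like_duck", "swims_like_duck", "quacks_like_duck"}, "is_duck"),
    ({"is_duck"}, "can_fly"),  # hypothetical rule, purely for illustration
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:  # <= is subset test
            facts.add(conclusion)
            changed = True

print("is_duck" in facts, "can_fly" in facts)  # True True
```

The machine mechanically chains symbols; it concludes "is_duck" without any notion of what a duck is, which is exactly the point: the output looks like reasoning, and for practical purposes it is.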

That’s where I’m at now, having done a good bit of reading and a few walks. If that changes, I’ll let you know. You’re also very welcome to call me out on my oversights and too simplistic view of such complex matters.

Brain brawl

I will now attempt to argue against all the great minds that are calling for halting AI’s development because it’s going to destroy society and enslave us all. I won’t argue that the alignment problem is a non-issue; I’ll argue that it isn’t the problem we can’t solve, nor the one we should be most concerned about.

What is the alignment problem? The alignment problem in AI refers to the challenge of ensuring that artificial intelligence systems act in accordance with human values and intentions. If the AI’s goals aren’t aligned with human goals, this can have unintended and potentially harmful consequences. This is already the first problem, because I don’t think anyone can explicitly define human values and goals.

Faux fiasco

But let’s assume that we all value our existence (a bold assumption, which we’ll let pass), and so at least we all have a common goal: to preserve life, all life, but especially human life.

Those who fear the end of the world claim that as AI systems evolve, the risks of misaligned objectives increase. I can agree with this, but I don’t believe that AI is going to have goals and agency to take over and harm humanity. That would require it to have emotions. It may be a false analogy, but humans need emotions in order to have agency.

You can hypothesise that it’ll do just that though, take over the world and harm us, even though it knows it’ll also destroy itself as it concludes that that outcome is better than what we have because of all the human imperfections, the pockets of evil and whatever else you can throw at it. I really think that would be quite unintelligent, and as I’ve argued before, AI fares quite well in intelligence.

So we’ll assume it’ll go down that road, because it’s not impossible. We do a risk assessment — in this case a quick one — which in my opinion will score very low on probability but have a devastating impact if it happens. Okay: implement measures to prevent that from happening or to mitigate the impact as much as possible. Ah, gotcha! That’s what they’re asking for, to halt development until we figure this out.

Spilled milk

The cat is out of the bag though, and the claim that only a handful of really big companies are developing this technology and therefore it can be easily stopped is, in my opinion, bonkers. If you think that a country making this decision isn’t going to accelerate development in other countries, you’re living in la-la land. The country or company that halts the development of this technology for just 3 years will never again catch up with those that didn’t. 3 years is less than an election term in most if not all countries. Think about that.

Solving the alignment problem is important, though, even if the probabilities of doom are quite low. So I obviously think we need to put some thought into it and find intelligent ways not only to further reduce the likelihood of it happening, but also to engineer a way to shut it down when it gets out of control, and a quick and effective way to reboot.

I know the reboot option sounds too easy, and the counter-argument is that we won’t know whether it’s out of control or not, because we’ll live under the control of AI and have no understanding of what’s going on. I don’t think so, at least not fully. Whilst I agree that we won’t know exactly what’s going on under the hood with AI, I don’t believe we will lose complete control of it. At the end of the day, it’s programmable software. It’s quite different from the software we’ve been using up to now, particularly due to its ability to learn, but in essence it’s still programmable software.

Hold on to your neurons

And here I might lose you if I haven’t already, but I think we’ll decide to merge with the machine in one form or another. At first it’ll be a choice, or, more likely, it’ll be a luxury that only a few can choose. Eventually it’ll become ubiquitous, and finally you’ll have no other option. This isn’t the catastrophe the pundits are conjuring up, although for some that’s indeed a catastrophe and the end of humanity as we know it. They’ll claim that in a roundabout way, AI will destroy humanity.

I may be wrong, of course, but I don’t think we’ll choose a cyborg future because of the risks of AI, nor will the people who have no choice. Most will choose to merge due to the benefits it will give them, and once some start benefiting from it, everyone eventually follows or stays behind and is penalised professionally, economically, socially, in wellbeing, and in any other way you can think of.

The ones who I say won’t have a choice are those who’ll be “adapted” at birth. There may be a small minority of people who will resist, and I even concede that they’ll be able to prevent their babies from being “adapted”, but they won’t survive. That will be even worse than not being vaccinated as a child is nowadays. I mean real vaccines, obviously, as in the ones conforming to the old definition, you know, the ones that prevent infections.

Who’s evil

I’m not sure I’ve a convincing enough argument, but that’s the best I can do at the moment.

I’m more concerned with the actual alignment than the misalignment in the alignment problem though.

You see, I agree that we need to ensure that AI systems are safe, ethical and useful to humanity. But I’m of the opinion that humanity is much more dangerous than AI. And if AI is used by humans for nefarious purposes, we will bear the consequences of the destructive force this technology can unleash. It’s quite easy to direct its intelligence in one way or another. You’ll not be able to control it, but you’ll direct it. You’ll just feed it the information you want, creating a huge bias that doesn’t leave much room for imagination. That’s the real danger we should be worried about. Not only because of all the stupid people who will have access to this technology, but also because of the evil people. Both are dangerous in their own right, but when you combine them, they’re perhaps even more dangerous than intelligent, evil people.

So what can we do about it? I don’t have the answers, but I hope our brilliant minds will think about it. What I see at the moment is a lot of people advocating for policies and regulations to help us solve this conundrum, and we may even have some of those brilliant minds helping regulators and policy makers. I really hope so.

Human action

But I also think that’s the wrong way to go. If this is your first blog from me, I’m on the side of the free market. I don’t think regulations and policies will have a sufficient impact on our ability to do evil with these tools. They’ll not stop anyone or a relevant number of people. Criminals tend to ignore the law, and some laws create criminals. If you come to this side of the internet regularly, please do not bring up cheaters and rule breakers as if they fall into the same category. We’re talking about human extinction type of stuff.

I suggest that we get all these brilliant minds to think about how we can control and manipulate incentives so that it’s in your best interest, even if you’re an evil person and believe that the world should end, not to use technology for that purpose. It sounds like an impossible task, but it isn’t, it’s just very difficult. I’ll do my part and keep thinking about it. And if there is something I think should be said, I’ll say it and write it and let it loose in the wild.

The mentee mentorship

I’m not mocking or being sarcastic when I say that those who catastrophize the alignment problem are great minds; some of them are undoubtedly among the best. Nor am I suggesting that they don’t think about what I just mentioned, but what I read on the subject pretty much revolves around the alignment problem and its catastrophic outcomes, and how legislation seems to be the best solution for this situation. I’m well aware I’m not on the same level as those pundits. Not in knowledge, intelligence or insight on this subject. But that doesn’t mean my point is worthless. It’s well known that senior academics can learn a great deal from their undergraduates.

Disguised self-importance

I’ve no problem ranking myself somewhere near, but above, average in terms of intelligence. And if my rating were off, I’d expect the correction to be upward rather than downward.

If you hardly know me, it’s quite possible that you’d actually rank me lower and not read these blogs (no way, José), or that you’d accuse me of false modesty but pop into this corner of the world wide web from time to time. In either case, I’d be happy to make the case for keeping my rating. It’s not false modesty to be realistic and have a good understanding of my cognitive abilities.

I know I’ve one or two areas where I’m quite strong and a good bit above average, although I’m still far from highly intelligent; and other areas where I lose enough points to bring me back down, not far from average. For example, I understand I’ve strong intrapersonal intelligence and score high on that kind of intelligence. Perhaps I’m too self-critical, but that isn’t due to false modesty. On the other hand, while I’m a very sensitive person, my interpersonal intelligence is total rubbish and balances out the other, if it doesn’t outweigh it.

I’m not stupid though, even though I have been in the past. And although intelligence is important, a high degree of intelligence is not as important as you might think to live a happy, fulfilled and successful life. Stupidity has a much bigger impact.

No, it’s not ok

As you know, despite the stupidity of the majority of people, we’re incessantly making progress and technological developments that are constantly improving our quality of life. So all is well, isn’t it? Not quite.

The people who really carry our society and make a significant impact on it in terms of intelligence (and we have lots of people who contribute significantly and make a tremendous impact on our society who aren’t as intelligent but have other strengths) are a very small minority. The nominal numbers look big, but in reality they’re a very small percentage of people. Why is this important? Because there is strength in numbers. And I know, as you probably do, that there is such a thing as the intransigent minority and the minority rule. However, I’d argue that these groups aren’t usually characterised by stupidity.

When such a large part of the population is stupid, we’re bound to have societal issues, because politicians pander to the ignorance of the masses.

Wrap-up

This is the shortest blog I’ve managed. I’m sure I’ll come back to it at some point as technology evolves and my insights adapt to new information.

I think I’ve been able to explain why AI is actually stupid, and that’s a good thing. But at the same time, it’s also quite intelligent, and that’s a good thing too. There are definitely many problems and possibilities that we have to contend with. The fact that we don’t fully understand how the algorithm learns and evolves is worrying but a necessary evil. I posit that the main problem isn’t AI and the alignment problem, but humanity itself and the fact that in the wrong hands this technology can indeed be a powerful and destructive weapon.

And finally, although we’re virtually all intelligent, there’s too high a percentage of us who are stupid, which is very detrimental to society and the world in general.


Written by Alexandre Franco - Growth_Nerd

Entrepreneur, Blogger, Educator - Follow for my musings on topics such as business and personal development, technology, crypto and world affairs
