While AI has pretty much exploded over the past couple of years, it was an explosion almost 70 years in the making.
Unless you’ve been living under a rock for the past three years (and some would say that’s quite a tempting proposition by the way), you’ll have heard of AI. Artificial Intelligence. The thing that steals all of our jobs. The technology that’s going to take over the world and obliterate us all.
Well, maybe. But not yet. Although it has to be noted that away from the hype, key players hold genuine concerns about how AI might play out. A couple of years ago, the non-profit research and advocacy organisation Center for AI Safety released a one-sentence statement, signed by some of the key bods who have helped develop AI – including the leaders of OpenAI, Google DeepMind, and Anthropic.
The statement simply said this:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
And while subsequent research has concluded that the likes of ChatGPT and other large language models (LLMs) do not pose an existential threat to humanity, the fact remains that we’re entering an era of the unknown and uncertain. At great pace.
However, while all things AI are progressing rapidly now, momentum’s been building up for probably longer than you imagine.
Artificial intelligence in the 1940s-1950s – the early days of AI
Alan Turing’s name is world-renowned today. He played a pivotal role in World War II by helping to break the Enigma code – the story told in the excellent 2014 movie The Imitation Game.
A few years after the war, in 1950, Turing published a paper titled ‘Computing Machinery and Intelligence’.
The paper’s first section is titled ‘The Imitation Game’ – hence the title of the movie – and it seeks to explore the question: ‘Can machines think?’
And from there, things really started to pick up pace.
In 1956, a conference called The Dartmouth Summer Research Project on Artificial Intelligence was held – the event that gave the field its name, and which is now widely accepted as the ‘official’ beginning of AI.
The conference, organised by a chap called John McCarthy – an American computer scientist who spent almost 40 years as a professor at Stanford – brought together other pivotal minds in the field, including Marvin Minsky, Nathaniel Rochester and Claude Shannon. They proposed ‘every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it’.
Over the next 20 years, innovation continued apace, and some significant breakthroughs were made. In 1959, Arthur Samuel coined the term ‘machine learning’ while working on a program that could play checkers and learn from its previous games. In the same year, the first industrial robot, called Unimate, began working on a General Motors assembly line; by 1961, it had become the first mass-produced robot arm for factory automation. And in 1966, Joseph Weizenbaum created Eliza, one of the first chatbots. Eliza relied on simple pattern matching and keyword recognition, and while basic, it hinted at the potential of natural language processing.
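To give a flavour of just how simple that approach was, here’s a minimal sketch in Python – not Weizenbaum’s actual code (Eliza predates Python by decades), just an illustration of the keyword-matching idea, using a few made-up rules.

```python
import re

# A few invented Eliza-style rules: match a keyword pattern in the user's
# input, then echo part of it back inside a canned response.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(user_input: str) -> str:
    """Return the first matching canned response, or a generic fallback."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Please, go on."

print(respond("I am worried about robots"))  # Why do you say you are worried about robots?
print(respond("my job keeps me busy"))       # Tell me more about your job.
```

Everything it ‘understands’ is just a handful of hand-written patterns – which is precisely why Eliza could feel surprisingly conversational while knowing nothing at all.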
In the late 60s and early 70s – with John McCarthy now firmly in position at Stanford – the nearby Stanford Research Institute developed Shakey the Robot, which had the ability to perceive and reason about its environment.
The AI winter – mid-1970s
Technology usually follows a pretty consistent hype curve. Ideas are proposed, things seem to move at pace for a spell, people get excited, and then… everything goes quiet. Expectations have been raised beyond what the technology and systems can actually deliver, and things quieten down for a while.
A great example of that right now is driverless vehicles. The chatter around them has gone quiet for the moment, but they’ll be back…
And so it was with AI. In the mid-to-late 70s, people’s enthusiasm waned. Funding wasn’t as readily available, the technology’s limitations became apparent, and for a few years little progress was made. However, as the 1980s arrived, newfound enthusiasm saw AI projects re-emerge. Albeit temporarily.
Expert systems, operating on ‘if-then’ rules, helped automate certain tasks, while the Japanese Fifth Generation Computer Systems Project aimed to create tech that could translate, converse and reason as a human would.
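To give a rough idea of what those ‘if-then’ rules look like in practice, here’s a toy sketch in Python. It’s purely illustrative – the rules and facts are invented for this article, not taken from any real expert system – but it shows the basic pattern: if all of a rule’s conditions are known, its conclusion gets added to what the system knows.

```python
# Invented 'if-then' rules: (set of required facts, conclusion to add).
RULES = [
    ({"no_power_at_socket", "breaker_tripped"}, "reset_breaker"),
    ({"no_power_at_socket", "breaker_ok"}, "check_socket_wiring"),
]

def infer(facts):
    """Keep firing rules until no new conclusions can be derived."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

# The first rule fires here, so 'reset_breaker' is added to the known facts.
print(infer({"no_power_at_socket", "breaker_tripped"}))
```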
That wave of enthusiasm waned too, however, and in the late 80s and early 90s there was a second winter of discontent – one that only thawed with the arrival of the internet and the access to vast amounts of data it brought.
In 1997, IBM’s Deep Blue beat reigning chess world champion Garry Kasparov, while in 2002, Roomba – a robotic vacuum cleaner – was introduced.
In 2011, the brains at IBM were at it again, creating Watson, which won US quiz show Jeopardy!, while in 2014, Amazon launched Alexa, its popular voice-activated AI assistant.
Of course, the world of AI changed dramatically on 30 November 2022, when OpenAI launched ChatGPT to the world. Generative AI was now available for everyone to use, play with, experiment with, and explore.
And very quickly, lots of things changed – for some people, at least.
It was, however, a long time in the making. And you get the feeling that despite it being almost 70 years since The Dartmouth Summer Research Project on Artificial Intelligence, AI is just getting started.
When was AI invented? A timeline
1950: Alan Turing’s paper ‘Computing Machinery and Intelligence’, exploring ‘Can machines think?’, was published.
1956: The Dartmouth Summer Research Project on Artificial Intelligence, widely considered the ‘official’ beginning of AI, was held.
1959: The term ‘machine learning’ was coined by Arthur Samuel.
1961: The first mass-produced robotic arm for factory automation, Unimate, was working on General Motors’ assembly line.
1966: One of the first chatbots, Eliza, was created by Joseph Weizenbaum.
Late 1960s/early 1970s: Shakey the Robot, with the ability to perceive and reason about its environment, was developed at the Stanford Research Institute.
Mid-to-late 1970s: The first ‘AI winter’ occurred, with waning enthusiasm and reduced funding.
Late 1980s/early 1990s: A second ‘AI winter’ took place.
1997: IBM’s Deep Blue beat reigning chess world champion Garry Kasparov.
2002: Roomba, a robotic vacuum cleaner, was introduced.
2011: IBM’s Watson won the US quiz show Jeopardy!
2014: Amazon launched Alexa, its popular voice-activated AI assistant.
2022: OpenAI launched ChatGPT on 30 November.