A crucible of ideas
What is Artificial Intelligence (AI)?
September 13, 2020
The Genesis:
Work on Artificial Intelligence (AI) began in earnest soon after World War II, but the name itself was coined in 1956. A straightforward definition of artificial intelligence is hard to pin down, but the nearest one can get is:
“The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision making, and translation between languages.”
AI identifies the rhythm of data
In fact, AI is the ability of machines to use algorithms (computer-implementable instructions) to learn from data, and to use what has been learned to make decisions the way a human would.
AI-based technologies are already a necessary ingredient in a variety of sectors and processes, helping humans achieve significant improvements and efficiency gains in nearly every aspect of life.
AI technologies are now beginning to offer the ability to see (computer vision), hear (speech recognition), and understand (natural language processing) more than ever before.
It is hoped that AI will be able to make our lives easier by offering suggestions and predictions relating to important questions in our lives, impacting areas like our health, wellbeing, education, work, and how we interact with others.
It will also transform the way we do business by providing competitive advantages to the companies that seek to understand and apply these tools quickly and effectively.
THE BIRTH OF ARTIFICIAL INTELLIGENCE
The Dartmouth Workshop: AI’s birthplace
In 1955 John McCarthy, an American computer scientist and cognitive scientist, convinced Marvin Minsky, Claude Shannon and Nathaniel Rochester to help him put together a team of US researchers interested in automata theory, neural nets, and the study of intelligence.
The team organized a two-month workshop at Dartmouth, a private Ivy League research university in Hanover, New Hampshire, in the summer of 1956. By then McCarthy had moved to Stanford and then to Dartmouth College as an instructor, and Dartmouth later became the official birthplace of the concept of Artificial Intelligence.
There were 10 attendees in all at this special workshop, including Trenchard More from Princeton, Arthur Samuel from IBM, and Ray Solomonoff and Oliver Selfridge from Massachusetts Institute of Technology (MIT).
Then there were two researchers, Allen Newell and Herbert Simon from Carnegie Tech, who rather stole the show during the workshop. It was here, at this special workshop, that McCarthy made the first official use of the term artificial intelligence.
On the occasion of the 50th anniversary of the Dartmouth conference, McCarthy admitted that he had strongly resisted the use of terms like “computer” or “computational”. This was in deference to Norbert Wiener, an American mathematician and philosopher at MIT who was promoting analog cybernetic devices rather than digital computers.
Though the Dartmouth workshop did not produce any new breakthroughs, it was significant because it introduced all the major figures to each other. McCarthy conceived the idea of alpha–beta search in 1956, though he did not publish it. For the next two decades, Artificial Intelligence (AI) would be dominated by these people and their students and colleagues at MIT, CMU, Stanford, and IBM.
From Chess to Checkers: AI’s first tasks
Chess was one of the first tasks undertaken in AI. There were early efforts by many pioneers of computing, including Konrad Zuse in 1945, Norbert Wiener in his book Cybernetics (1948), and Alan Turing in 1950.
But it was Claude Shannon’s 1950 scientific article, “Programming a Computer for Playing Chess”, that had the most complete set of ideas: it described a representation for board positions, an evaluation function, quiescence search, and some ideas for selective (non-exhaustive) game-tree search. After this study, Slater (1950) and the commentators on his article also explored the possibilities for computer chess play.
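Shannon’s two core ideas, an evaluation function applied to board positions and a search over the game tree, can be made concrete in a few lines. The sketch below is a minimal, hypothetical minimax search with alpha–beta pruning over a toy tree; the tree shape and leaf values are invented for illustration and have nothing to do with real chess.

```python
# A minimal sketch of game-tree search with an evaluation function and
# alpha-beta pruning, in the spirit of Shannon (1950) and McCarthy (1956).
# Leaves (numbers) stand in for evaluation-function scores of positions;
# nested lists stand in for positions with legal moves. Toy example only.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Return the minimax value of `node`, pruning branches that cannot
    affect the final decision."""
    if isinstance(node, (int, float)):      # leaf: evaluation-function value
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # prune: opponent avoids this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if beta <= alpha:               # prune: we would never play into this
                break
        return value

# Toy game tree: the maximizing player moves first.
tree = [[3, 5], [6, [9, 2]], [1, 4]]
print(alphabeta(tree, True))   # prints 6
```

The pruning is what McCarthy’s alpha–beta idea adds to plain minimax: whole subtrees are skipped once it is clear the opponent would never allow them, which is what made deep chess search feasible.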
D. G. Prinz completed a program in 1952 that solved chess endgame problems, though it did not play a full game. The most complete description of a modern chess program was provided by Ernst Heinz in 2000, whose DARKTHOUGHT program was the highest-ranked non-commercial PC program at the 1999 world championships.
The game of checkers, however, was the first of the classic games fully played by a computer: Christopher Strachey wrote the first working checkers program in 1952.
Understanding Deep Learning and Data in Artificial Intelligence
One of the most recognisable, powerful and fast-emerging technologies of artificial intelligence is deep learning, a sub-field of machine learning. It is being used to solve problems that were previously considered too complex, and it normally involves huge amounts of data. Deep learning occurs through neural networks, which are layered to recognize complex relationships and patterns in data.
A prerequisite for deep learning, however, is a huge dataset and substantial computational power. Deep learning is currently being used for speech recognition, natural language processing, computer vision, and vehicle identification, mainly for driver assistance.
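To make the idea of “layered” networks concrete, here is a minimal sketch of a forward pass through a tiny two-layer network in plain Python. The weights, biases, and inputs are arbitrary illustrative numbers, not a trained model; real deep networks have many more layers and learn their weights from data.

```python
import math

# A minimal sketch of a forward pass through a two-layer neural network.
# Each layer takes a weighted sum of its inputs, adds a bias, and applies
# a non-linearity; stacking such layers is what makes a network "deep".
# All numbers below are arbitrary illustrative values, not a trained model.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One dense layer: weighted sum of inputs plus bias, then sigmoid."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Hidden layer: 2 inputs -> 2 units; output layer: 2 units -> 1 unit.
W1, b1 = [[0.5, -0.2], [0.3, 0.8]], [0.1, -0.1]
W2, b2 = [[1.0, -1.0]], [0.0]

hidden = layer([0.6, 0.9], W1, b1)   # first layer extracts simple features
output = layer(hidden, W2, b2)       # second layer combines them
print(output[0])                     # a value between 0 and 1
```

Training, which this sketch omits, consists of adjusting the weights so the output matches known examples; that adjustment over huge datasets is where the data and computational power mentioned above come in.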
As computer processing becomes exponentially more powerful, computers can run ever more complex algorithms. Data is another important element that has propelled the development of AI: in the most basic terms, without data it would be nearly impossible to create AI products and applications.
Data analysis usually relies on two kinds of information: structured data and unstructured data. To really comprehend AI systems, it is important to recognize the key differences between these two types of data. Traditionally, structured data has been used more often than unstructured. Structured data includes simple data inputs like numerical values, dates, currencies, or addresses. Unstructured data includes data types that are more complicated to analyze, such as text, images, and video. However, the development of AI tools has made it possible to analyze more kinds of unstructured data, and the resulting analyses can then be used to make recommendations and predictions.
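The difference can be illustrated with a small hypothetical example: a structured transaction record exposes its fields directly, while the same fact buried in unstructured text must first be extracted, here with a simple regular expression standing in for far more capable AI tools.

```python
import re

# A toy illustration of structured vs unstructured data (hypothetical example).

# Structured: named fields with known types, ready for direct analysis.
structured_record = {"date": "2020-09-13", "amount": 49.99, "currency": "USD"}
amount = structured_record["amount"]            # direct lookup

# Unstructured: free text; the same fact must first be extracted.
unstructured_note = "Customer paid 49.99 USD for the order on 2020-09-13."
match = re.search(r"(\d+\.\d{2})\s*(USD|EUR)", unstructured_note)
extracted = float(match.group(1)) if match else None

print(amount, extracted)   # the same value, reached two different ways
```

A regular expression only works because this sentence is predictable; the point of modern AI tools is that they can pull such facts out of text, images, and video where no simple pattern exists.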
A key feature of artificial intelligence is that it enables machines to learn new things, rather than requiring programming specific to each new task. This highlights the core difference between computers of the future and those of the past: future computers will be able to learn and self-improve.
Soon, smart virtual assistants like Google Assistant, Apple’s Siri and Amazon’s Alexa will know more about you than your family members.
Applications of Artificial Intelligence
The technology is already being applied to a variety of sectors and industries, such as finance, education, healthcare, retail, entertainment, travel, transportation, journalism, agriculture, and government, to name a few.
Greater Efficiency with Less Paperwork:
In the United States (US), JP Morgan Chase and Company introduced a machine learning program called COIN, or ‘Contract Intelligence’, which eliminated over 3.6 lakh (360,000) hours of work for lawyers each year, saving a huge amount of money and increasing productivity immensely. COIN uses AI to review commercial loan agreements in just seconds, performing the kinds of analyses that used to take a team of lawyers hundreds of hours to complete.
Another example is self-driving cars. The car’s computer system takes in all external data and computes on it to act in a way that prevents a collision. Self-driving cars have been controversial, as they tend to be designed for the lowest possible risk and the fewest casualties. If presented with a scenario of colliding with one person or another at the same time, these cars would calculate the option that would cause the least amount of damage.
For banks, AI recognises unusual debit card usage and makes trading easier
In the financial industry, AI is used to detect and flag suspicious activity in banking and finance, such as unusual debit card usage and large account deposits, all of which helps a bank’s fraud department.
AI applications are also being used to help streamline trading and make it easier, by making the supply, demand, and pricing of securities easier to estimate.
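One simple way the fraud-flagging idea can work is outlier detection. The sketch below is a hypothetical example that flags debit card transactions far outside an account’s usual spending pattern; the data, the z-score approach, and the threshold are all illustrative, and real bank fraud systems are far more sophisticated.

```python
import statistics

# A hypothetical sketch of flagging unusual debit card usage:
# transactions far from the account's typical amount (measured in
# standard deviations) are marked for the fraud department to review.
# The threshold and data are illustrative, not any bank's real method.

def flag_unusual(amounts, threshold=2.0):
    """Return the amounts more than `threshold` standard deviations
    from the account's mean transaction amount."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [a for a in amounts if abs(a - mean) > threshold * stdev]

# Everyday purchases around $20-30, then one very atypical transaction.
history = [24.0, 31.5, 18.2, 27.9, 22.4, 29.1, 25.6, 950.0]
print(flag_unusual(history))   # prints [950.0]
```

Flagging is deliberately conservative: a flagged transaction is not declared fraud, only routed to a human analyst, which is exactly the division of labor described above.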
Autonomous tractors are a reality
Autonomous tractors are no longer just an idea. Companies like Autonomous Tractor Corporation and John Deere have been successfully developing and improving their tractors over time. The agricultural industry can benefit tremendously from autonomous tractors, agricultural drones for spraying insecticides or nutrients, and new kinds of sensors, which will make it easier to perform duties that previously required human labor, such as monitoring the health and wellbeing of livestock.
WHAT IS THE FUTURE OF AI?
The scope of AI now extends to robopets and military bots, and to tools that assist clinicians, technologists, C-suite executives, and working lawyers.
In the publishing world, artificial intelligence is already on a ‘disruption’ course. One can already see AI helping to improve understanding of reading preferences, match books of many kinds with readers, forecast best-selling books, create data-driven works, and edit manuscripts with tools like ProWritingAid or Grammarly.
Another example is the plagiarism checker, which helps ensure that you cite properly researched studies and reduces the risk of future copyright issues. In fact, a recent survey conducted by the Future of Life Institute’s AI Impacts project predicts that AI will be capable of writing a best-seller by 2050.
WASP is artificial intelligence software created by Pablo Gervás of the Complutense University of Madrid. The researcher has spent 17 years perfecting his robot writer. WASP has learned to compose poetry, inspired by works from Spain’s Golden Age. Gervás says the motivation behind his research was to find a way to understand the structure of poetry and study the creative process, in order to make writers’ work easier.