Advances in artificial intelligence (AI) have made science fiction a reality in recent years as machines have learned to emulate human thought processes and move us towards a world in which they operate independently of human intervention.
But while AI often makes us think of sentient machines, in the tech world the term refers to systems that solve problems or complete tasks based on rules written by humans, requiring little further intervention.
Subsets of AI called “machine learning” and “deep learning” involve even more machine independence, but are still far from being self-aware.
The First “Intelligent” Machine
Mathematician and computer scientist Alan Turing laid the foundation for AI during the Second World War while working to build a machine that could break the “Enigma” code, the encryption German forces used to communicate in secret. With the development of the code-cracking Bombe machine, Turing and his colleagues laid the groundwork for later AI breakthroughs.
Turing is also responsible for creating a set of guidelines that gauge whether a machine is “intelligent.” His Turing Test is a method of determining whether a machine can converse with a human in a way that fools them into thinking they are talking to another real person.
The Term “Artificial Intelligence”
The term “artificial intelligence” was first used in a 1956 proposal for a workshop called the Dartmouth Summer Research Project on Artificial Intelligence. Soon after this conference, researchers Allen Newell and Herbert Simon undertook an ambitious project to further the study of AI and machine learning.
Together, Newell and Simon developed the first general-purpose problem-solving program, the General Problem Solver, which used means-ends analysis to work through formalized problems step by step. After that, there was no stopping the development and advancement of artificial intelligence and machine learning.
Artificial Intelligence Today
Artificial intelligence is now used in everything from recommending your next television show to reading MRIs.
From huge technology hubs to small cities and towns, AI is beginning to replace human labor and surpass our abilities to solve problems efficiently.
Here are some of the past year’s AI breakthroughs of particular interest:
Facebook Portal
In October 2018, Facebook Portal made its way into homes. Portal was designed to connect people in a way that made them feel like they were in the same room together, no matter the physical distance between them.
The device contains a smart camera that recognizes people in a video call and automatically pans and widens its angle, so users stay in frame even when moving around.
The smart volume adjusts itself so users can hear conversations even when moving away from the Portal, and the device can easily be carried into different rooms throughout one’s home.
Smart Home Assistants
AI-powered assistants are now ubiquitous, making it possible for people to control gadgets and manage tasks via voice commands. Whether you are trying to turn up your heat or turn off the lights, smart assistants can connect to various smart devices throughout your home so you can manage objects and appliances from the comfort of your couch.
Speakers also allow home assistants like Amazon’s Alexa to play your music, bring you your news, or even call you a taxi.
While home assistants have been around for a few years, they expanded their reach and skillsets in 2018.
AI in Health Care
AI is also helping to save lives. Dozens of medical advancements and treatments have benefitted from the introduction of artificial intelligence over the last few years.
Here are just a few ways in which artificial intelligence in healthcare evolved in 2018:
AI-powered nursing assistants can significantly reduce unnecessary visits to the hospital by providing easy access to computer-generated avatars that can answer questions and address simple concerns.
These virtual AI nurses are available to respond to queries and problems 24/7 without any delay. Some can even monitor patients constantly, making sure that everything is under control as well as record information on hospital visits and patient care plans.
For example, Molly, a virtual nurse developed by Sensely, provides remote monitoring and support for patients with common and high-cost medical conditions. When hooked up to wearable devices, this AI nurse allows patients to monitor their health by showing them their weight, blood pressure, heart rate, etc.
AI has also been incorporated into nursing robots that can help alleviate some of the time pressure faced by busy nurses around the world and allow them to spend more quality time with patients.
Google’s deep learning division made great strides in building algorithms that can detect diseases, including eye diseases and breast cancer.
In 2018, Google announced that its Lymph Node Assistant (LYNA) algorithm identified metastatic breast cancer with 99 percent accuracy, and its retinal imaging software learned to predict risk factors for heart attacks and strokes from eye scans alone.
Pharmaceutical companies and academic researchers have also developed algorithms that can identify skin cancer better than trained dermatologists, warn patients about precancerous changes in the cervix up to 1.3 times more accurately than standard tests, and diagnose causes of childhood blindness better than expert physicians.
The transportation industry saw some of the most significant AI milestones in 2018.
Self-Driving Cars and Buses
AI has facilitated breakthroughs in futuristic self-driving cars, trucks, buses, and taxis.
For example, Tesla’s Autopilot project drew great public interest in 2018. The cars rely on machine learning techniques that familiarize the software with routes, signals, road signs, and anything else that allows the car to run with minimal human intervention.
In addition, Ericsson’s Connected Urban Transport implemented virtual drivers in shuttles around Stockholm this year, while Baidu announced that it had begun mass production of self-driving buses in China.
Back in the U.S., reactions to self-driving buses were mixed: the University of Michigan deployed a pair of self-driving shuttles on its campus to great fanfare, while Florida announced and then quickly paused an autonomous school bus project.
This year also saw Rolls-Royce’s successful test of its first autonomous ship.
With self-driving cars and buses in the spotlight for years, the company decided to add sea transport to the mix. It tested a prototype in 2018 and plans to roll out its ships over the next six years.
Rolls-Royce’s ship intelligence kit includes an array of cameras and sensors placed around the vessel to track its environment. The docking procedure is also automated.
AI & Media Companies
In November 2018, China’s state-run Xinhua News Agency launched the first AI television news presenter.
It was trained through a machine learning technique that monitored and summarized breaking news bulletins. The anchor’s first news story was about its existence:
“The development of the media industry calls for continuous innovation and deep integration with the international advanced technologies. I will work tirelessly to keep you informed as texts will be typed into my system uninterrupted. I look forward to bringing you the brand new news experiences.”
The robot anchor was modeled on Xinhua presenter Zhang Zhao and co-developed by Xinhua and Chinese search engine Sogou.com.
This event marked the first time a robot replaced a human newscaster. The anchor signed off by saying:
“Before we go, I’d like to send my good wishes to all of the journalists across the country. As an AI anchor under development, I know there is a lot for me to improve.”
Artificial intelligence took commerce to the next level in 2018 via smart cities, cashier-less stores, and targeted marketing campaigns.
Alibaba ET Brain
Chinese retail giant Alibaba’s ET Brain is an intelligent AI platform designed to solve complex social and business problems.
The project started in the city of Hangzhou in China’s Zhejiang province. It’s designed to collect and store data about cities and everyone in them in the cloud. Developers hope to train algorithms on this data to more efficiently control new smart cities. The Brain is currently working to solve traffic issues.
While the city owns the data collected, the Alibaba Group provides the software. Alibaba is looking to collaborate with smaller companies to sell its data-gathering and machine learning resources. The project has about 120,000 developers on board, as well as 2,700 academic institutions and businesses from around 77 countries.
“Every city has the potential of becoming a smart city as long as its data resource can be fully activated by City Brain,” says Wanli Min, a machine learning researcher at Alibaba Cloud.
Cashier-less Stores and AI
The idea of cashier-less stores is not new; automated checkout has been around for a few years. But some of the more sophisticated cashier-less stores are now moving towards a complete AI-powered infrastructure, where there would be minimal or no human involvement.
Companies like Amazon and Walmart have even started putting robots in charge of restocking shelves, replacing human workers. You can walk into an Amazon Go store and shop without ever dealing with a human employee.
AI in Targeted Marketing
Have you ever been surprised to find that the advertisements on your social media feed or smart devices match your preferences exactly? That’s because ad agencies can collect your data and personal information and analyze it with AI algorithms, which then choose and display the products you are most likely to buy.
The idea is not new, but the execution became more sophisticated in 2018 as companies began purchasing more data to learn about their audiences.
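As a rough illustration of what such matching involves, here is a minimal sketch that scores products by how well their tags overlap with a user’s interest profile. Real ad systems use far richer models; all names and data below are hypothetical.

```python
# Toy content-based ad matcher: score each product by the overlap
# between its tags and a user's interest profile (data made up).

def score(product_tags, user_interests):
    """Fraction of a product's tags that match the user's interests."""
    tags = set(product_tags)
    return len(tags & set(user_interests)) / len(tags)

def rank_ads(products, user_interests, top_n=2):
    """Return the top-n product names, best match first."""
    ranked = sorted(products,
                    key=lambda p: score(p["tags"], user_interests),
                    reverse=True)
    return [p["name"] for p in ranked[:top_n]]

products = [
    {"name": "running shoes", "tags": ["sport", "fitness", "outdoor"]},
    {"name": "coffee maker",  "tags": ["kitchen", "coffee"]},
    {"name": "yoga mat",      "tags": ["fitness", "wellness"]},
]
user = ["fitness", "coffee", "travel"]

print(rank_ads(products, user))  # ['coffee maker', 'yoga mat']
```

The same idea scales up: replace hand-written tags with learned user and product features, and the scoring function with a trained model.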
AI in Education
AI has dramatically influenced the education sector by providing new tools that shift some of the workloads away from teachers so they can spend time having more meaningful interactions with students.
The usefulness of virtual classrooms is still being debated, with some commentators skeptical that they can improve learning without violating student privacy or leaving some students behind. While AI and virtual reality (VR) teachers are being actively discussed, this is a technology still under development.
2018 brought plenty of commentary from multiple sides about how this technology might be developed responsibly in areas beyond long-distance learning. One significant step forward came when Vector, an Auckland, New Zealand energy company, and the AI company Soul Machines debuted the first digital teacher, an avatar named Will, to offer energy education to schools.
Overall, the education sector has been slow to adopt AI due to ethical concerns.
AI in Smartphones
Artificial intelligence has been in your smartphone, as Google Assistant or Siri, for years. But in 2018, smartphones got a massive AI boost and continued to grow beyond the ability to perform simple tasks.
In 2018, AI played a significant role in improving users’ ability to take photos with a smartphone camera. Computational photography and visual processing will eventually change the way we use cameras altogether.
Google took Google Lens to a whole new level in 2018 by integrating new features.
Lens is a visual search tool initially rolled out for Pixel devices and later made available for iOS and Android. It allows the user to take a photo, or view a real-time scene through the camera, and learn more about the objects it recognizes.
Now users can zoom in on someone’s picture to get a closer look at their shoes or dress and then use the search feature to find out where they got the item.
The improved features also allow users to find lookalikes via Google image search integration.
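Under the hood, that kind of lookalike search boils down to nearest-neighbor matching over image feature vectors. Here is a toy version of the idea; the catalogue and vectors are invented, and production systems use deep-learning embeddings with thousands of dimensions rather than three.

```python
# Toy visual lookalike search: represent each catalogue image as a
# small feature vector and return the one nearest to the query by
# cosine similarity (all vectors here are made-up illustrations).
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

catalogue = {
    "red sneakers":  [0.9, 0.1, 0.3],
    "blue dress":    [0.1, 0.8, 0.5],
    "brown loafers": [0.8, 0.2, 0.4],
}

def find_lookalike(query_vec, catalogue):
    """Return the catalogue item whose vector is most similar."""
    return max(catalogue, key=lambda name: cosine(query_vec, catalogue[name]))

print(find_lookalike([0.9, 0.1, 0.25], catalogue))  # red sneakers
```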
AI in Pixel Phones
Google is on the cutting edge of AI when it comes to phones.
Their smart Top Shot camera feature captures a burst of frames, automatically chooses the best shot to store in high resolution, and keeps the rest as low-res backups. Another AI-powered feature, Super Res Zoom, allows for high-resolution zooming without the usual drop in image quality.
Similar technology went into the Night Sight mode on the Pixel 3’s camera, which captures a stack of exposures, analyzes them to pick out the bright spots in each, and combines them to produce a bright final picture even in near-dark scenes.
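The core trick, averaging a burst of noisy frames and then brightening the merged result, can be sketched in a few lines. The 1-D pixel lists below are hypothetical stand-ins for real image data; actual Night Sight also aligns frames and tone-maps the result.

```python
# Toy exposure stacking: average several dark, noisy "frames"
# (here 1-D lists of pixel values) to suppress random noise,
# then apply a gain to brighten the merged result.

def stack_frames(frames):
    """Average the frames pixel by pixel to reduce random noise."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def brighten(pixels, gain, max_value=255):
    """Scale pixel values up, clipping at the sensor maximum."""
    return [min(p * gain, max_value) for p in pixels]

# Three noisy captures of the same dark scene (values made up).
frames = [
    [10, 22, 31, 8],
    [14, 18, 29, 12],
    [12, 20, 30, 10],
]

merged = stack_frames(frames)
print(brighten(merged, gain=4))  # [48.0, 80.0, 120.0, 40.0]
```

Averaging works because the true scene brightness is the same in every frame while the noise varies randomly, so it cancels out as more frames are combined.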
AI in Messaging
Smart messages and replies rolled out on multiple platforms in 2018, including Gmail, Google Docs, and the Android messaging app.
This technology suggests phrases and sentences that suit the context of your conversation or match previous replies. The smart software learns your most common responses and the situations in which you use them to suggest the most appropriate smart reply and save you typing time.
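At its simplest, that kind of suggestion can be modeled as counting which reply a user most often sends in a given context. The sketch below is a deliberately tiny, hypothetical version; real smart-reply systems use neural language models rather than keyword counts.

```python
# Toy smart-reply: count which replies a user has sent after
# messages containing a given keyword, then suggest the most
# frequent one (history and keywords are invented examples).
from collections import Counter

history = [
    ("lunch", "Sounds good!"),
    ("lunch", "Sounds good!"),
    ("lunch", "Can't today"),
    ("meeting", "On my way"),
]

def suggest_reply(keyword, history):
    """Return the reply most often used for messages with this keyword."""
    replies = Counter(r for k, r in history if k == keyword)
    if not replies:
        return None
    return replies.most_common(1)[0][0]

print(suggest_reply("lunch", history))  # Sounds good!
```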
AI to Improve Battery Life
Artificial intelligence is even helping to improve battery life on phones.
This year, the Android team worked with Alphabet’s DeepMind on a feature known as “Adaptive Battery” that uses artificial intelligence to track battery consumption. The program learns a user’s routines, automatically shuts down background applications that aren’t commonly used, and adjusts the phone’s brightness based on usage habits to preserve battery life.
The adaptive battery feature spends battery only on the most frequently used applications, reducing CPU usage by an average of 30 percent.
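A heavily simplified sketch of that policy: sort apps into buckets by how often they are used, and restrict background work for the rarely used ones. The usage log and threshold below are hypothetical; the real feature uses a learned model of usage patterns, not a raw launch count.

```python
# Toy adaptive-battery policy: apps used rarely in a recent usage
# log go into a "restricted" bucket whose background work would be
# deferred (log contents and threshold are invented examples).
from collections import Counter

usage_log = ["mail", "chat", "mail", "maps", "chat", "mail", "game"]

def bucket_apps(log, active_threshold=2):
    """Split apps into active vs restricted by launch count."""
    counts = Counter(log)
    active = {app for app, c in counts.items() if c >= active_threshold}
    restricted = set(counts) - active
    return active, restricted

active, restricted = bucket_apps(usage_log)
print(sorted(active))      # ['chat', 'mail']
print(sorted(restricted))  # ['game', 'maps']
```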
AI and Disaster Predictions
The planet is always in flux, exposing us to new environmental challenges, and AI is playing a pivotal role in making a once unpredictable situation a little easier to deal with.
For example, Google has debuted an AI-based system that could help forecast natural disasters. The company employed a team of AI experts to predict the aftershocks of earthquakes using models trained on data from previous events. This technology could help protect people from further injury.
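To give a flavor of the problem, here is a toy risk estimate in which each past event contributes to nearby locations with a contribution that decays with distance. This is a simple heuristic for illustration only; the actual research trained neural networks on seismic stress data, and all numbers below are invented.

```python
# Toy aftershock-risk sketch: estimate risk at a location from past
# events, each weighted by magnitude and an exponential distance
# decay (events, coordinates, and decay constant are all made up).
import math

# (magnitude, x, y) of past mainshocks on an arbitrary grid.
events = [(6.0, 0.0, 0.0), (5.0, 10.0, 5.0)]

def risk(x, y, events, decay=5.0):
    """Sum each event's magnitude weighted by distance decay."""
    total = 0.0
    for mag, ex, ey in events:
        d = math.hypot(x - ex, y - ey)
        total += mag * math.exp(-d / decay)
    return total

# Risk is highest near past events and falls off with distance.
print(round(risk(1.0, 1.0, events), 3))
```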
Human-like Speech Using AI
Tacotron 2 is a natural text-to-speech system created by Google to generate human-like speech from text using neural networks. These neural networks are trained using speech examples and text transcripts.
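Systems like this begin by converting raw text into a numeric sequence that a neural network can consume. A minimal character-encoding front-end might look like the sketch below; the vocabulary is a hypothetical stand-in for the one an actual synthesizer would use.

```python
# Toy text-to-speech front-end: normalize text and map characters
# to integer IDs, the kind of sequence a neural synthesizer takes
# as input (this vocabulary is invented for illustration).
VOCAB = {ch: i for i, ch in enumerate(" abcdefghijklmnopqrstuvwxyz.,?")}

def text_to_ids(text):
    """Lowercase the text and encode known characters as IDs."""
    return [VOCAB[ch] for ch in text.lower() if ch in VOCAB]

print(text_to_ids("Hello, world."))
```

From there, one network maps the ID sequence to an audio spectrogram and a second network turns the spectrogram into a waveform.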
Google is still working to achieve better results, as the system faces noise issues along with some pronunciation problems, which it hopes to improve by the end of 2019.
AI and machine learning techniques were game-changers in 2018. They raised some severe challenges too, such as security and privacy threats – but that’s a debate for another time.
In the end, 2018 was the year when artificial intelligence found its footing in our everyday lives.