What separates voice assistants like Siri and Alexa from AI chatbots is their underlying infrastructure.
Voice assistants are built as “command-and-control systems,” meaning they are programmed to recognize specific commands and produce a predetermined response. If you say, “What’s the weather like today?”, the tool matches the request against its database to identify what you’re asking and offers up a brief forecast. I asked my own Alexa-enabled device a series of additional, related queries: “Is it nice?”, “What’s it like today?”, and “How’s it out?”
Alexa provided weather updates for all three, even though my phrasing was intentionally vague. However, my first awkward ask elicited a weather blurb from a local news outlet nowhere near my location. What’s notable is that the device answered every prompt in essentially the same format.
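To make that concrete, here is a minimal, hypothetical sketch of how a command-and-control system might map several phrasings of the same question to one canned response. The phrase list, function name, and response template are illustrative assumptions, not Alexa’s actual implementation.

```python
# Toy command-and-control pipeline: every recognized phrasing maps to the
# same predefined intent and the same fixed-format response.
WEATHER_PHRASES = {          # hypothetical, hand-maintained phrase list
    "what's the weather like today",
    "is it nice",
    "what's it like today",
    "how's it out",
}

def handle_utterance(utterance: str) -> str:
    normalized = utterance.lower().strip("?!. ")
    if normalized in WEATHER_PHRASES:
        # Every match produces the same canned answer.
        return "Today's forecast: 72 degrees and partly cloudy."
    return "Sorry, I don't know that one."

print(handle_utterance("How's it out?"))  # the canned forecast
print(handle_utterance("Is it nice?"))    # the exact same format
```

Any phrasing not already on the list simply falls through to the fallback reply, which is why adding coverage means adding phrases.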
Chatbots, on the other hand, are built on large language models and generate unique, human-like responses. They aren’t shackled by the constraints that keep voice assistants from consistently responding with relevant information. Chatbots are also far less likely to deliver four nearly identical answers to four differently worded questions.
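A chatbot generates its reply rather than looking one up, so the same question can come back worded differently on every call. A minimal sketch, assuming the OpenAI Python client with an API key in the environment; the model name is just one possible choice:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The reply is generated token by token rather than pulled from a fixed
# template, so repeated calls can phrase the answer differently.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": "Is it nice out today?"}],
)
print(response.choices[0].message.content)
```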
Voice assistants are trained to recognize questions from a predetermined database. For Siri in particular, The New York Times reports that simply adding new phrases to the list could take up to six weeks, owing to the many languages supported and the complexity of the master list itself, which former Apple engineer John Burkey described as “one big snowball.”