Alexa+ is the new iteration of that monster (which I don't use, and never have), and the linked report is a sorta, kinda first look at what it does, and a bit of how it does it.
The bad news is that despite its new capabilities, Alexa+ is too buggy and unreliable for me to recommend. In my testing, it not only lagged behind ChatGPT's voice mode and other A.I. voice assistants I've tried, but was noticeably worse than the original Alexa at some basic tasks.
Ooops.
So, what's the problem?
The old Alexa, [Daniel Rausch, Amazon VP] said, was built on a complicated web of rule-based, deterministic algorithms. Setting timers, playing songs on Spotify, turning off the lamp in your living room — all of these features required calling up different tools and connecting with different interfaces, and they all had to be programmed one by one.
Ain't inference grand? Not always. But, by merging ground facts (which likely live in some RDBMS) with probable guesses, with the ground facts taking precedence, whoever gets there first will have a winner. The remaining question, as always, is figuring out the most efficient merge protocol. Read the ground facts first and prohibit the innterTubes scrape from replacing any of them, or run the innterTubes scrape and then replace the horseshit with ground facts? Only The Shadow knows.
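The two merge orders can be sketched in a few lines, assuming (my assumption, not anything Amazon has published) that both the ground facts and the scrape boil down to key-value maps; the function names and sample data are mine, purely illustrative:

```python
# Hypothetical sketch of the two merge protocols described above.
# Neither the schema nor the names reflect Alexa's actual internals.

def facts_first(ground_facts: dict, scraped: dict) -> dict:
    """Load ground facts first; let the scrape only fill in gaps."""
    merged = dict(ground_facts)
    for key, value in scraped.items():
        merged.setdefault(key, value)  # never overwrite a ground fact
    return merged

def scrape_first(ground_facts: dict, scraped: dict) -> dict:
    """Load the scrape first; then stamp ground facts over the guesses."""
    merged = dict(scraped)
    merged.update(ground_facts)  # ground facts replace any guess
    return merged

# Illustrative data: facts win wherever the two sources collide.
facts = {"timer": "10:00", "lamp": "off"}
guesses = {"lamp": "on", "weather": "probably rain"}

assert facts_first(facts, guesses) == scrape_first(facts, guesses)
```

Either order yields the same answer, since the ground facts win every collision; the cost difference is in wasted work. Facts-first does one guarded lookup per scraped key, while scrape-first copies the whole scrape and then overwrites part of it, so which is cheaper depends on the relative sizes of the two sources.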
[my emphasis]
Adding generative A.I. to Alexa forced Amazon to rebuild many of these processes, Mr. Rausch said. Large language models, he said, are "stochastic," meaning they operate on probabilities rather than a strict set of rules. That made Alexa more creative, but less reliable.
The datastore for vanilla Alexa is DynamoDB, which is promoted as NoSQL but, as you can read, manages to maintain its own version of ACID.
Now, whether, and if so to what extent, Alexa+ deviates (in a non-consistency way) from Alexa's use of DynamoDB wasn't covered in the report.