Voice Control: How Ford’s Innovative SYNC® System Understands What You’re Saying

  • SYNC® is an in-vehicle communications and entertainment system that is continuously developed and updated to increase your productivity and connectivity in your Ford.

  • Voice command allows a driver to control the different functions of SYNC® by pressing a button on the steering wheel and speaking a command.

  • A language model and decoder software enable SYNC®’s voice command feature to recognize commands in different languages and accents.

  • User-generated data from SYNC® 3’s over-the-air diagnostics and analytics helps engineers tailor updates that streamline the operation of voice command.

When Ford launched its innovative and intuitive SYNC® system 13 years ago, voice-controlled features revolutionised the way we interacted with our vehicles. SYNC® 3 has grown and evolved to support 25 languages, allowing more and more people globally to enjoy its features.

Leading that development is Ford’s Core Speech Technology team based in Dearborn, Michigan. The team is headed by Yvonne Gloria, who has been involved with SYNC® innovation since SYNC® 3 was announced in 2014. A software engineer by trade, Gloria says simplicity is the key to SYNC’s success.

“Not all users of our software are engineers,” she said. “Just because I developed the software to do a specific task, the customer shouldn’t be forced to see it that way. This led me to study how people use computers and learn software, which made me think more like a customer rather than an engineer.”

How does SYNC® know what I’m saying?
From its rudimentary origins in 2007, SYNC® has evolved into SYNC® 3, one of the most intuitive and innovative voice-activated systems available.

At the core of SYNC®’s voice-activated system is a speech engine that acts as the speech recognition brain. A language model and decoder software within the speech engine break down, analyse and understand each spoken command.

The language model is a vast list of words and commands, each paired with a specific task. For example, the command “Call John Doe” is listed in each of the 25 languages that SYNC® supports. The entire catalogue of commands corresponding to SYNC®’s voice-activated features is held in the language model.
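Conceptually, the language model can be pictured as a catalogue that pairs localized command phrases with the task each one triggers. The sketch below illustrates that pairing; the phrases, task names and structure are illustrative assumptions, not Ford’s actual data or implementation.

```python
# Hypothetical sketch of a language model as a catalogue of commands.
# Phrases, task names and the dict layout are illustrative only.
LANGUAGE_MODEL = {
    "en": {"call <name>": "phone.dial", "play <song>": "media.play"},
    "de": {"rufe <name> an": "phone.dial", "spiele <song>": "media.play"},
    "fr": {"appelle <name>": "phone.dial", "joue <song>": "media.play"},
}

def lookup_task(language, phrase):
    """Return the task paired with a catalogued command phrase, if any."""
    return LANGUAGE_MODEL.get(language, {}).get(phrase)

# The same task is reachable from any supported language.
print(lookup_task("de", "rufe <name> an"))  # phone.dial
print(lookup_task("fr", "appelle <name>"))  # phone.dial
```

Because every language maps to the same underlying tasks, adding a new language means extending the catalogue rather than rebuilding the features themselves.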

The decoder software takes the sound characteristics of each command and matches them against the list of words in the language model. Using the same example, when “Call John Doe” is said, the decoder analyses the sound characteristics of the spoken phrase and then finds the most similar set of characteristics within the language model.
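The matching step can be sketched as a nearest-match search: score the observed utterance against every catalogued command and return the best fit. The toy example below uses simple string similarity in place of real acoustic features; the command list and scoring method are assumptions for illustration, not how SYNC®’s decoder actually works.

```python
import difflib

# Toy "language model": the phrases the decoder is allowed to output.
COMMANDS = ["call john doe", "call jane doe", "play radio", "navigate home"]

def decode(observed: str) -> str:
    """Score the observed utterance against each catalogued command and
    return the closest match, mimicking a decoder's candidate search."""
    scored = [
        (difflib.SequenceMatcher(None, observed, cmd).ratio(), cmd)
        for cmd in COMMANDS
    ]
    best_score, best_cmd = max(scored)
    return best_cmd

# A slightly mis-heard utterance still resolves to the intended command.
print(decode("cal jon doe"))  # call john doe
```

A production decoder scores acoustic features against phonetic models rather than raw text, but the principle is the same: the output is always one of the commands listed in the language model.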

Different accents from different regions of a country are also taken into consideration.

User centric development
Constant evolution has helped Core Speech Technology’s engineers refine and expand SYNC®’s functionality. By analysing the ways customers use SYNC®, engineers are able to make the system more intuitive, either by streamlining tasks or by making them easier to access. Through constant refinement, the team has been able to make more than 80 per cent of SYNC’s voice commands single-step processes.

With over-the-air diagnostics and analytics on SYNC® 3, engineers receive a steady flow of recorded voice data, collected with users’ permission, showing how customers work through different tasks. By detecting the common errors users encounter, engineers can streamline those tasks rather than leaving users to figure them out for themselves.
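The analytics loop described above amounts to tallying where users stumble so engineers know which tasks to streamline first. The sketch below shows that idea with a made-up usage log; the log format and command names are hypothetical and do not reflect Ford’s actual telemetry.

```python
from collections import Counter

# Hypothetical usage log of (command, succeeded) pairs; the schema and
# values are illustrative, not Ford's real diagnostics data.
LOG = [
    ("call contact", True), ("call contact", False), ("call contact", False),
    ("play radio", True), ("navigate home", True), ("navigate home", False),
]

def failure_counts(log):
    """Tally failed attempts per command to surface the tasks most worth
    streamlining in a future update."""
    failures = Counter(cmd for cmd, ok in log if not ok)
    return failures.most_common()

print(failure_counts(LOG))  # [('call contact', 2), ('navigate home', 1)]
```

Ranking commands by failure count gives the team an evidence-based priority list for the next round of updates.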

“It’s a never-ending activity once the program starts, until it goes to the end of its life cycle, because you’re constantly taking market feedback to create updates further down the road,” said Stephen Cooper, Voice Recognition Features Lead – SYNC® 3.

The Future of Voice Commands
There is still a lot of potential for SYNC-equipped vehicles to enhance your driving experience, Gloria explained. “As technology moves forward and gets better, and as buttons are eliminated in place of bigger and more prevalent screens in the vehicle, voice command technology has a big part to play in the future.”

For Cooper, the potential to eliminate the hazards of distracted driving is key. “The number of accidents I have seen that have been caused by someone being distracted while driving was one of the drivers that made me go into voice recognition. Reducing distractions as much as possible and making things easier to operate will hopefully keep the drivers safe.”


© Sime Darby Auto Connexion Sdn Bhd