This article is part of the On Tech newsletter. You can sign up here to receive it on weekdays.

If Amazon’s Alexa thinks you sound sad, should it suggest that you buy a gallon of ice cream?

Joseph Turow says absolutely no way. Dr. Turow, a professor at the University of Pennsylvania’s Annenberg School for Communication, researched technologies like Alexa for his new book, “The Voice Catchers.” He came away convinced that companies shouldn’t be allowed to analyze what we say and how we sound in order to recommend products or personalize advertising messages.

Dr. Turow’s proposal is noteworthy in part because profiling people by their voices isn’t widespread. Or at least, not yet. But he is encouraging policymakers and the public to do something I wish we did more often: be careful and deliberate about how we deploy powerful technology before it’s used to make consequential decisions.

After spending years researching Americans’ evolving attitudes toward our digital streams of personal data, Dr. Turow said that some technologies carried so much risk for so little upside that they should be stopped before they got big.

In this case, Dr. Turow worries that voice technologies like Amazon’s Alexa and Apple’s Siri will evolve from digital butlers into fortune tellers that use the sound of our voices to work out intimate details like our moods, desires, and medical conditions. In theory, they could one day be used by the police to decide whom to arrest or by banks to say who is worthy of a mortgage.

“Using the human body to discriminate against people is something we shouldn’t be doing,” he said.

Some businesses, such as call centers, already do this. If computers conclude that you sound angry on the phone, they may route you to operators who specialize in calming people down. Spotify has also applied for a patent on technology to recommend songs based on speech cues about the speaker’s emotions, age, or gender. Amazon has said that its Halo bracelet and health monitoring service will analyze “energy and positivity in a customer’s voice” to nudge people toward better communication and relationships.

Dr. Turow said he isn’t trying to stop potentially helpful uses of voice profiling, such as screening people for serious health conditions, including Covid-19. But there is very little benefit to us, he said, when computers use inferences from our speech to sell us dishwashing detergent.

“We need to ban voice profiling for marketing purposes,” Dr. Turow said. “There is no public benefit. We’re creating another set of data that people have no idea how it’s being used.”

Dr. Turow is wading into a debate about how to treat technologies that could have tremendous benefits but also downsides we may not see coming. Should the government try to set rules and regulations for powerful technology before it becomes widespread, as is happening in Europe, or leave it largely alone unless something bad happens?

The tricky part is that once technologies like facial recognition software or car rides summoned at the press of a smartphone button become prevalent, it’s much harder to pull back features that turn out to be harmful.

I don’t know if Dr. Turow is right to sound the alarm about our voice data being used for marketing. A few years ago there was a lot of hype that voice would become an important way to shop and learn about new products. But no one has proved that the words we say to our gadgets are effective predictors of which new truck we’ll buy.

I asked Dr. Turow whether people and government regulators should get worked up about hypothetical risks that may never materialize. Reading our minds from our voices might mostly not work, and we don’t really need more things to feel freaked out about.

Dr. Turow acknowledged that possibility. But I came around to his point that it’s worth starting a public conversation about what could go wrong with voice technology, and deciding together where our collective red lines are before we cross them.

  • Mob violence accelerated by an app: At least 100 new WhatsApp groups have been formed in Israel for the purpose of organizing violence against Palestinians, my colleague Sheera Frenkel reported. Rarely have people used WhatsApp for such targeted violence, Sheera said.

  • And when an app fuels vigilantes: Citizen, an app that alerts people to crime and dangers in their neighborhood, posted a photo of a homeless man and offered a $30,000 reward for information about him, claiming he was suspected of starting a wildfire in Los Angeles. Citizen’s actions helped set off a hunt for the man, whom the police later said was the wrong person, wrote my colleague Jenny Gross.

  • Why many popular TikTok videos have the same bland vibe: This is an interesting Vox article about how the algorithm-driven app rewards videos “in the jumbled median of all average tastes in the world.”

Here is a decidedly non-bland TikTok video with a happy horse and a couple of happy puppies.

We want to hear from you. Tell us what you think of this newsletter and what else you would like us to explore. You can reach us at ontech@nytimes.com.

If you don’t already get this newsletter in your inbox, please sign up here. You can also read past On Tech columns.