1. Singapore-based Grab plans to invest $150 million in AI, including improvements to its natural language processing (NLP) technology, as it seeks to build a regional super app. In its latest funding round, the ride-hailing and food delivery startup is targeting $4.5 billion from investors including SoftBank Group Corp.’s Vision Fund. The company, which is valued at $14 billion, has 2,000 engineers, including 300 in AI-related jobs. According to co-founder Tan Hooi Ling, Grab is working with Microsoft to improve its NLP as it develops a "super app," similar to WeChat, that would let users do everything from ride-hailing and food delivery to digital payments and various forms of communication. - BLOOMBERG
2. As part of its recent $170 million settlement with the Federal Trade Commission over children’s privacy violations, YouTube says it will use AI to identify videos aimed at kids so they cannot be paired with targeted ads. Under the deal announced on Wednesday, the Google unit will stop selling personalized ads on videos aimed at children; regulators say it illegally harvested kids' personal data and made millions in profit by targeting them with ads. The FTC and New York’s attorney general accused YouTube of violating the federal Children’s Online Privacy Protection Act. Critics note that Google's machine-learning software, which will work to identify content intended for children, has come under scrutiny for past failures, including when thousands of videos of the March New Zealand terrorist attack appeared on YouTube. - BLOOMBERG
3. SenseTime Group Ltd., which now has a valuation of over $7.5 billion, is in no rush to go public, according to its CEO. Xu Li made the remarks during a Bloomberg conference in Singapore on Thursday, days after rival Megvii filed for its own IPO in Hong Kong. After several rounds of financing, Alibaba-backed SenseTime now has total funding of more than $3 billion. Xu told the conference that SenseTime neither owns nor accesses customer data and doesn't work directly with China's government, adding that as a leading company, "we should have the responsibility to collaborate with the government and regulator to come up with regulations" for AI ethics. The company is currently developing an AI training chip, mainly in-house but also via startups, and remains cash-flow-negative despite triple-digit-percentage growth in revenue. - KR-ASIA
4. Developers at Google have created a new framework for building more accurate models for machine vision, language translation, and predictive analytics. The Neural Structured Learning (NSL) framework trains neural networks using "structured signals" - the connections or similarities among labeled and unlabeled data samples - which boosts model accuracy, especially when labeled data is scarce. NSL lets TensorFlow users easily incorporate various structured signals into neural network training and can be used for supervised, semi-supervised, and unsupervised learning. - DATANAMI
A version of this story first appeared in Inside CTO/CIO.
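The structured signals in item 4 can be thought of as a graph-regularization term added to the usual supervised loss: samples connected in a similarity graph are penalized for having distant embeddings. Here is a minimal NumPy sketch of that idea - the function name, shapes, and weighting are illustrative, not NSL's actual API:

```python
import numpy as np

def graph_regularized_loss(logits, labels, embeddings, neighbors, multiplier=0.1):
    """Supervised cross-entropy plus a 'structured signal' penalty that
    pulls the embeddings of graph-neighboring samples together."""
    # Softmax cross-entropy over the labeled samples.
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)
    supervised = -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
    # Neighbor penalty: mean squared distance between the embeddings of
    # samples that are connected in the similarity graph.
    penalty = np.mean([np.sum((embeddings[i] - embeddings[j]) ** 2)
                       for i, j in neighbors])
    return supervised + multiplier * penalty
```

Minimizing this combined loss nudges the model toward consistent predictions for similar samples, including unlabeled ones that appear only through graph edges - which is why the approach helps most when labeled data is scarce.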
5. New research supports the idea that facial recognition algorithms make terrible truth detectors and are bad at reading emotions. Jonathan Gratch, director for virtual human research at the USC Institute for Creative Technologies, and colleagues recently presented their findings at the International Conference on Affective Computing and Intelligent Interaction in Cambridge, England. The research, which used computer vision techniques to analyze facial expressions, showed that people smile for many reasons, not just happiness, underscoring the idea that people's facial expressions often don't match what they are truly feeling or thinking. This has implications for the use of facial recognition and other AI technology. "Think about how people used polygraphs back in the day to see if people were lying," Gratch says. "There were misuses of the technology then, just like misuses of facial expression technology today." - INTERESTING ENGINEERING
6. A team from the University of Oxford developed a facial recognition algorithm that detects, tracks, and recognizes chimpanzees in video footage. The researchers trained the AI on 50 hours of footage of 23 chimpanzees in Guinea, West Africa, which yielded 10 million facial images. According to their study published in Science Advances, the AI correctly identified an animal’s sex 96 percent of the time and had an overall identity recognition accuracy of 92 percent. In a head-to-head test, the AI identified individual chimps in 30 seconds with 84 percent accuracy, while humans given the same task took 55 minutes and were only 42 percent accurate. The AI can also work on other primates, the researchers said. - NEW SCIENTIST
7. The music tech startup Musiio, which uses AI to analyze music catalogs, announced its first commercial client, Audio Network. The music-production firm, which creates content for the entertainment industry, will use Musiio’s technology to help clients comb through its catalog of more than 170,000 tracks more quickly. The AI will be incorporated as an added interface to Audio Network's existing search platform, which also relies on people to curate music. - THE INDUSTRY OBSERVER
8. Great Wolf Lodge, a chain of indoor water park resort hotels, is developing an AI that can automatically scan guest comments. The company's GAIL AI is intended to better pinpoint guest sentiment and reduce or eliminate the need for employees to scan through social media and review websites, which can take many hours. The effort is part of the company's larger digital strategy, which will utilize cloud and SaaS technologies as well as new property management and CRM systems. - CIO
9. Cogito has closed a $20 million funding round, with participation from New York Life Ventures, Goldman Sachs Ventures, and Salesforce Ventures. The company, which is working on technology to better detect PTSD, plans to use the cash infusion to expand its Coaching AI system and grow its emotion-detecting AI. - VENTURE BEAT
10. AI may not destroy humanity, but it has the potential to make us boring, writes University of Rochester professor Adam Frank in an opinion piece for NBC's THINK. Frank argues that the “computational theory of mind" — the notion that we are essentially computers with neural circuits and nothing more — is only a philosophy, not scientific truth, but the tech industry treats it as fact and will fashion AI to fine-tune our reality, especially by trying to predict (often incorrectly) our internal emotional states. He notes, "If we’re not careful, we may all find ourselves living in a world of unconscious machines destined to make us less human by boxing us into an existence that fits their algorithms." - NBC NEWS
Written and curated by Beth Duckett in Orange County. Beth is a former reporter for The Arizona Republic who has written for USA Today, Get Out magazine and other publications. Follow her tweets about breaking news and other topics in southern California here.
Editor: Kim Lyons (Pittsburgh-based journalist and managing editor at Inside).