Welcome to the newest edition of FindBiometrics’ AI update. Here’s the latest big news on the shifting landscape of AI and identity technology:
The U.S.’s AI-focused sanctions on China are working, at least to some extent. According to a new report from The Information, Huawei has been struggling in recent weeks to ramp up production of its Ascend 910B AI chip, largely because of restrictions on what kind of chipmaking equipment American firms can export to China.
OpenAI has informed Chinese users that it will block access to its AI tools in the country starting in July. It wasn’t officially available in China anyway, but users had found workarounds that OpenAI will soon try to disrupt. Local tech companies like Alibaba and Baidu are now scrambling to lure developers and other users to their own platforms.
Stability AI has a new CEO and has been rescued from financial peril, at least for now. Prem Akkaraju, its new chief executive, is the former head of a visual effects company. Stability AI is well known for its Stable Diffusion image generation engine, but has lately run into serious financial trouble. Former Facebook president Sean Parker, among others, has come to the rescue with investment funding.
SoftBank is investing $10-20 million in Perplexity AI, at a $3 billion valuation. The startup is developing an AI-powered search engine to rival Google. SoftBank has been making big bets on AI; this one comes by way of its Vision Fund 2.
OpenAI has delayed the launch of its much-touted voice assistant, citing safety concerns. It was meant to roll out this month, but has been pushed back to the fall. OpenAI has recently seen a number of defections by AI safety researchers unhappy with what they view as its reckless pace of technological advancement, including co-founder Ilya Sutskever, who announced his own AI company last week.
AI deepfakes are being used more for disinformation than for fraud or any other application of the technology, according to research from Google's DeepMind AI arm. A new study from DeepMind, conducted with Google's Jigsaw unit, asserts that the most common goal of malicious deepfakes is influencing public opinion.
The chatbot’s take: We asked ChatGPT for a little more background on OpenAI’s sudden shift on China. Fact-check: ChatGPT is speculating a bit with its answer.
June 28, 2024 – by Alex Perala