Welcome to the newest edition of ID Tech’s AI update. Here’s the latest big news on the shifting landscape of AI and identity technology:
OpenAI may not turn a profit until 2029, according to a new report from The Information. Citing financial documents, the report projects that OpenAI's losses could reach $14 billion in 2026, and that cumulative losses between 2023 and 2028 could hit $44 billion. Microsoft, its biggest investor, takes 20 percent of OpenAI's revenues. It all helps to illustrate why OpenAI is trying to raise so much money.
AMD will begin mass production of its next AI chip by the end of this year, the company has announced. Vendors such as Super Micro Computer are expected to start shipping the new MI325X chip in Q1 of 2025. The chip features HBM3E high-bandwidth memory, a new memory type designed to speed up AI computations.
Nvidia’s next-generation Blackwell AI processors are sold out for the next 12 months. The news emerged from a meeting between Nvidia executives and investment bank Morgan Stanley, whose analyst Joseph Moore summarized it in a report for clients. “Every indication from management is that we are still early in a long-term AI investment cycle,” he wrote. “The clear view continues to be that we are set up for an exceptionally strong 2025.”
The FIN7 cybercrime gang has been using purported deepfake nude generators to lure victims into downloading malware. Across multiple websites, the group has advertised tools that supposedly turn uploaded still images into nude photos of the individuals depicted. Attempting to sign up for the service instead triggers downloads of malicious software, including ransomware.
RAND Europe is warning that deepfake technology poses a serious risk in military and strategic competition, flagging, among other things, that it could be used to essentially manufacture confrontations, or to generally sow distrust among decision-makers. The Cambridge-based think tank says deepfakes could contribute to an “erosion of traditional deterrence mechanisms” by injecting more ambiguity and uncertainty into the strategic environment.
OpenAI’s own “Influence and Cyber Operations” report, meanwhile, is a bit more sanguine. Malefactors are certainly using AI in influence campaigns, but the report asserts that most cases currently involve the generation of attention-grabbing images, rather than deceptive depictions of events that never happened, or distortions of real events.
The chatbot’s take: We asked GPT-4o for some background on one of the more tangible threats mentioned above.
–
October 11, 2024 – by Alex Perala