Cobo founder DiscusFish has said that the new iPhone 17 introduces a Memory Integrity Enforcement (MIE) feature that boosts crypto wallet security. The system is designed to block advanced memory attacks during crypto wallet signing by combining hardware and software protections.

Why It Matters for Crypto Users

Apple shared in a September 9 blog post that MIE is powered by the A19 chip and uses the Enhanced Memory Tagging Extension (EMTE), which checks memory in real time. This setup instantly blocks common exploits such as buffer overflows and use-after-free attempts. For the crypto industry this is important because memory flaws account for nearly 70% of all software vulnerabilities and are a common entry point for malware during wallet operations. Signing processes have always been a top target for hackers, since a single weak spot can lead to the theft of funds. Apple’s new MIE steps in by stopping these attacks at the hardware level before they can cause damage. Shutting down these threats early makes wallet signing much safer and leaves spyware far less room to steal assets. Another benefit is that the protections are always on, so users do not need to set up anything themselves. DiscusFish called the feature “a major win for high-net-worth crypto users and frequent signers.”

Apple has also addressed side-channel risks with a function called Tag Confidentiality Enforcement (TCE), which prevents attackers from exposing memory tag values through speculative execution or other means. This closes another pathway often used by hackers to get at wallet data. The company’s security team confirmed that MIE was tested against real-world exploit chains, with most attacks stopped in their earliest stages, reducing the opportunities for bad actors to compromise software. The protections also extend beyond Apple’s native tools: developers can enable these features through Enhanced Security settings in Xcode, allowing third-party crypto apps to benefit from the same defense model.

iPhone 17 Sets New Standard for Wallet Safety

Overall, the new iPhone 17 reduces the risk of spyware targeting private keys by combining typed memory allocators, tag checks, and confidentiality safeguards. This means digital asset owners can reduce their reliance on external hardware wallets or specialized devices for everyday signing. Elsewhere, a recent report from Web3 security firm CertiK revealed that more than $2.1 billion has already been lost to crypto-related attacks in 2025. Wallet breaches account for the bulk of these losses, with compromised apps alone responsible for $1.6 billion, making them the most damaging attack vector by a wide margin.
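Memory tagging, the mechanism behind EMTE, is easiest to grasp with a toy model. The Python sketch below is a conceptual illustration of tagging in general, not Apple’s implementation: each allocation gets a small tag, memory is retagged on free, and any access whose pointer tag no longer matches is refused, which is how a use-after-free gets stopped at the moment of access.

```python
import secrets

class TaggedHeap:
    """Toy model of hardware memory tagging (conceptually similar to MTE/EMTE):
    each allocation gets a tag, and every access must present a matching tag."""

    def __init__(self):
        self.tags = {}    # address -> tag currently assigned to that memory
        self.data = {}    # address -> stored value
        self.next_addr = 0

    def alloc(self, value=None):
        addr = self.next_addr
        self.next_addr += 1
        tag = secrets.randbelow(16)   # 4-bit tags, as in Arm memory tagging
        self.tags[addr] = tag
        self.data[addr] = value
        return (addr, tag)            # a "tagged pointer"

    def free(self, ptr):
        addr, tag = ptr
        # Retag on free so stale pointers stop matching. Forced to differ here
        # for a deterministic demo; real hardware retags randomly, catching
        # reuse with high probability rather than certainty.
        self.tags[addr] = (tag + 1) % 16

    def load(self, ptr):
        addr, tag = ptr
        if self.tags.get(addr) != tag:
            raise MemoryError("tag check failed: use-after-free or overflow")
        return self.data[addr]

heap = TaggedHeap()
p = heap.alloc("signing key material")
heap.free(p)
try:
    heap.load(p)                      # stale pointer: tag no longer matches
except MemoryError as e:
    print("blocked:", e)              # the access is refused, not silently served
```

The point of enforcing this in hardware, as the article describes, is that the check runs on every access without the app opting in, so a compromised signing flow cannot simply skip it.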
The U.S. Trustee Program (USTP) has secured a judgment denying bankruptcy protection to a Texas man who attempted to evade more than $12.5 million in debts linked to a cryptocurrency Ponzi scheme. On August 1, the Bankruptcy Court for the Southern District of Texas entered a default judgment against Nathan Fuller, owner of Privvy Investments LLC. Fuller had filed for Chapter 7 bankruptcy in October 2024, shortly after a state court appointed a receiver to seize his assets following investor lawsuits. But federal investigators found that Fuller concealed assets, falsified documents, and lied under oath in an effort to avoid repaying creditors.

Bankruptcy Court Bars Crypto Scheme Operator From Discharging $12.5M

According to the USTP, Fuller used Privvy Investments to solicit funds under the guise of crypto investments, only to divert investor money for personal use. Records show that he spent heavily on luxury items and gambling trips and purchased a nearly $1 million home for his ex-wife, who was also involved in the business. Despite the separation, Fuller continued to reside at the property. U.S. Trustee Kevin Epstein, who oversees Region 7 covering the Southern District of Texas, said the ruling underscores the program’s stance against fraud. “Fraudsters seeking to whitewash their schemes will not find sanctuary in bankruptcy,” Epstein said in a statement. “The USTP remains vigilant for cases filed by dishonest debtors, who threaten the integrity of the bankruptcy system.”

Investigators alleged that Fuller not only concealed extensive assets but also failed to maintain records and submitted false testimony in both his personal bankruptcy filing and the one filed on behalf of Privvy. At one point, Fuller was held in civil contempt for failing to comply with court orders. During proceedings, he admitted to operating Privvy as a Ponzi scheme, fabricating documentation, and providing false statements designed to obstruct the work of the court-appointed Chapter 7 trustee. After Fuller failed to respond to the USTP’s complaint, the court entered a default judgment in favor of the agency. As a result, Fuller remains personally liable for his debts, including more than $12.5 million in unsecured obligations listed in his filings, and creditors are now free to continue collection efforts against him.

The USTP said its mission is to protect the integrity of the bankruptcy system for debtors, creditors, and the public, and emphasized that the outcome in Fuller’s case demonstrates its commitment to holding dishonest actors accountable. The judgment adds another chapter to the mounting scrutiny around crypto-linked investment schemes. While legitimate blockchain firms continue to raise capital and build infrastructure, fraudulent ventures such as Fuller’s highlight the risks facing investors. Earlier this year, web3 wallet infrastructure firm Privy, a separate company unrelated to Fuller’s operation, closed a $15 million funding round led by Ribbit Capital, bringing its total raised to more than $40 million. The company’s wallet-enabled stack powers projects like Hyperliquid, Farcaster, OpenSea, and Blackbird, serving over 50 million accounts across payments, DeFi, and gaming. The juxtaposition of these developments reflects a maturing crypto sector still grappling with trust issues stemming from fraud.
Fraud Shadows Over Crypto Sector as Similar Cases Highlight Investor Risks

In a similar case, on July 8, San Jose-based fintech firm Linqto filed for Chapter 11 bankruptcy in the Southern District of Texas, exposing deep cracks in its business model.

“Linqto’s Chapter 11 filing uncovers the pre-IPO illusion, as investors may not have owned shares as believed. #Linqto #Bankruptcy https://t.co/1VVgzRgi5s” — Cryptonews.com (@cryptonews), July 8, 2025

Once marketed as a gateway for everyday investors to buy pre-IPO shares in tech giants like Ripple and CoreWeave, the platform now faces allegations that customers may never have actually owned the shares they believed they purchased. The company listed assets and liabilities between $500 million and $1 billion, with more than 10,000 creditors potentially affected. Chief Restructuring Officer Jeffrey Stein said “years of mismanagement” and securities law violations dating back to 2020 left Linqto possibly insolvent, further shaking confidence in retail access to private markets.

“@TheJusticeDept has filed a civil forfeiture complaint to recover $5M in stolen bitcoin traced to SIM swap attacks. #Bitcoin #Cybercrime #Simswap https://t.co/tlTqzTUKDr” — Cryptonews.com (@cryptonews), September 9, 2025

Separately, the U.S. Department of Justice announced yesterday a civil forfeiture action to seize over $5 million in Bitcoin stolen through SIM swap attacks. Prosecutors said attackers hijacked victims’ phone numbers between October 2022 and March 2023, intercepting authentication codes to drain crypto wallets. The stolen funds were allegedly funneled through multiple wallets and eventually into an account at Stake.com, an online casino. Investigators accuse the perpetrators of using circular transactions to disguise the source of the Bitcoin before consolidation.
Solana breakout shows SOL holding $200 support after a clear symmetrical-triangle breakout; heavy short liquidations (~$6.3M) pushed momentum higher and point to $260–$300 as the next resistance zone, while overheating risks could still spark volatility.
Solana’s breakout and heavy short liquidations fuel bullish momentum, but overheating risks could spark volatility.
Toncoin and Quant are two altcoins that have witnessed a surge in whale transactions recently, something that could foreshadow volatility for their prices.

Toncoin & Quant Have Seen A Spike In Whale Transaction Count

In a new post on X, on-chain analytics firm Santiment discussed the latest trend in the Whale Transaction Count for two altcoins: Toncoin (TON) and Quant (QNT). This indicator measures the total number of transfers on a given network carrying a value of more than $100,000. Generally, only big-money investors or “whales” are capable of making transfers this large, so the metric’s value is considered to correspond to activity from this cohort. These holders carry some degree of influence in the market, so whenever they are on the move, the market itself can experience fluctuations, which makes their activity worth keeping an eye on.

Related Reading: Bitcoin’s Most Resolute Diamond Hands Are Only Growing Older, Data Shows

Below is the chart shared by Santiment showing how the Whale Transaction Count has changed for Toncoin and Quant over the last few months. As is visible in the graph, the metric has seen a large spike for both assets recently, suggesting whales have been active on the networks. Interestingly, despite TON being the much bigger network in terms of market cap, its spike amounted to a value of just 3, while QNT saw the metric touch the 24 mark. That said, the small value Toncoin has witnessed is still high compared to its past: only one other spike in the last three months has matched this one. Quant, in contrast, has seen a few spikes of a similar scale. It would appear, then, that whales simply tend to be less active on TON in general.

As for what the spikes could imply for the altcoins, price volatility may be coming, if the past is any guide. “Historically, large spikes in $100K+ sized moves foreshadow price direction changes,” explains the analytics firm. These changes, however, can occur in either direction: the Whale Transaction Count only tallies the number of moves the large entities are making and contains no information about the breakdown between buy and sell moves.

Related Reading: Cardano Pushes Past $0.85: Falling Wedge Breakout Confirmed?

As such, it’s always hard to tell whether a spike in whale activity is bullish or bearish for an asset’s value. The whales being active on the Toncoin and Quant networks may only suggest that some sort of sharp price action is on the horizon.

TON Price

At the time of writing, Toncoin is floating around $3.1, down around 1.6% over the last seven days.
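To make the indicator concrete, here is a minimal sketch of a whale-transaction counter in Python, assuming a simple list of timestamped transfer values in USD. The records and layout are illustrative, not Santiment’s actual data pipeline; the only idea being demonstrated is the $100K threshold and the per-day count whose spikes the firm flagged.

```python
from collections import Counter
from datetime import datetime

WHALE_THRESHOLD_USD = 100_000  # Santiment's cutoff for a "whale" transfer

# Illustrative transfer records: (ISO timestamp, transfer value in USD)
transfers = [
    ("2025-09-08T14:02:00", 250_000),
    ("2025-09-08T15:31:00", 80_000),     # below threshold, not counted
    ("2025-09-09T09:12:00", 1_200_000),
    ("2025-09-09T11:47:00", 430_000),
]

def whale_transaction_count(transfers):
    """Count $100K+ transfers per day; a sudden jump in this series is the
    kind of spike the article describes for TON and QNT."""
    daily = Counter()
    for ts, usd in transfers:
        if usd > WHALE_THRESHOLD_USD:
            day = datetime.fromisoformat(ts).date()
            daily[day] += 1
    return dict(daily)

print(whale_transaction_count(transfers))
# {datetime.date(2025, 9, 8): 1, datetime.date(2025, 9, 9): 2}
```

Note that the counter carries no buy/sell direction, which is exactly why, as the article says, a spike signals likely volatility but not which way price will move.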
A widely used Bitcoin technical analysis indicator suggested that BTC is on the verge of an “explosive price expansion” toward new all-time highs.
Ripple’s new custodial partnership with Spanish banking group BBVA integrates Ripple custody into BBVA’s retail crypto trading platform, enabling secure direct custody of Bitcoin and Ethereum and strengthening Ripple’s institutional presence.
Unlocking Predictability: Thinking Machines Lab’s Revolutionary Push for AI Consistency

In the fast-paced world of technology, where even the slightest unpredictability can have significant financial implications, the quest for reliable artificial intelligence has become paramount. For those invested in cryptocurrencies and other high-stakes digital assets, the stability and accuracy of underlying AI systems, from market analysis tools to decentralized application components, are not just desirable but essential. Imagine an AI predicting market trends or executing trades; its consistency is as crucial as the security of the blockchain itself. This is precisely the frontier that Mira Murati’s highly anticipated Thinking Machines Lab is set to revolutionize.

The Critical Need for Consistent AI Models

For too long, the AI community has largely accepted a fundamental challenge: the inherent nondeterminism of large language models (LLMs). If you’ve ever asked ChatGPT the same question multiple times, you’ve likely received a spectrum of answers, each slightly different. While this variability can sometimes mimic human creativity, it poses a significant hurdle for applications requiring absolute precision and reliability. Consider enterprise solutions, scientific research, or advanced financial modeling: consistent outputs are not a luxury; they are a necessity. This is where the work of Thinking Machines Lab steps in, challenging the status quo and aiming to engineer a new era of predictable and trustworthy AI models.

The problem of nondeterminism manifests in several ways:

- Lack of Reproducibility: Researchers struggle to replicate experimental results, slowing down scientific progress.
- Enterprise Adoption Challenges: Businesses hesitate to deploy AI in critical functions if they cannot guarantee consistent outcomes.
- Debugging Difficulties: Diagnosing errors in AI systems becomes exponentially harder when outputs vary randomly.

Mira Murati, formerly OpenAI’s chief technology officer, has assembled an all-star team of researchers, backed by an astounding $2 billion in seed funding. Their mission, as unveiled in their first research blog post, “Defeating Nondeterminism in LLM Inference,” on their new platform “Connectionism,” is clear: to tackle this foundational problem head-on. They believe the randomness isn’t an unchangeable fact of AI but a solvable engineering challenge.

Decoding Nondeterminism in LLM Inference

The research from Thinking Machines Lab, detailed by researcher Horace He, delves into the technical underpinnings of this nondeterminism. He argues that the root cause lies not in the high-level algorithms but in the intricate orchestration of GPU kernels, the small programs that run inside powerful Nvidia chips and carry out AI inference, the process that generates responses after you input a query into an LLM. During LLM inference, billions of calculations are performed simultaneously across numerous GPU cores. The way these kernels are scheduled, executed, and their results aggregated can introduce tiny, almost imperceptible variations. Compounded across the vast number of operations in a large model, these variations lead to the noticeable differences in outputs we observe. Horace He’s hypothesis is that by gaining meticulous control over this low-level orchestration layer, it is possible to eliminate or significantly reduce this randomness.
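One standard illustration of how low-level execution order produces run-to-run drift (offered here as a conceptual example, not as TML’s specific finding) is that floating-point addition is not associative: kernels that aggregate partial sums in different orders get slightly different answers. A few lines of Python make the effect visible.

```python
import random

random.seed(0)
values = [random.uniform(-1e6, 1e6) for _ in range(100_000)]

# Two reduction orders over the same numbers, standing in for the different
# aggregation orders that different GPU kernel schedules can produce.
forward = sum(values)
reverse = sum(reversed(values))

print(forward == reverse)      # almost certainly False: FP addition is not associative
print(abs(forward - reverse))  # a small but nonzero discrepancy
```

In an LLM, a discrepancy this small can flip a near-tie between two candidate tokens, and once one token differs, the rest of the generation diverges, which is why the same prompt can yield visibly different answers.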
This isn’t just about tweaking a few parameters; it’s about fundamentally rethinking how AI computations are managed at the hardware-software interface. The approach highlights a shift in focus:

- From Algorithms to Orchestration: Moving beyond model architecture to the underlying computational execution.
- Hardware-Aware AI: Recognizing the profound impact of hardware-software interaction on model behavior.
- Precision Engineering: Applying rigorous engineering principles to AI inference processes.

This level of control could unlock unprecedented reliability, making AI systems behave more like traditional deterministic software, where the same input always yields the same output.

Why AI Consistency is a Game-Changer for Innovation

The implications of achieving true AI consistency are vast and transformative, extending far beyond simply getting the same answer twice from ChatGPT. For enterprises, it means building trust in AI-powered applications, from customer service chatbots that always provide uniform information to automated financial analysis tools that generate identical reports given the same data. Imagine the confidence businesses would have in deploying AI for critical decision-making if they could guarantee reproducible outcomes.

In the scientific community, the ability to generate reproducible AI responses is nothing short of revolutionary. Scientific progress relies heavily on the ability to replicate experiments and verify results. If AI models are used for data analysis, simulation, or hypothesis generation, their outputs must be consistent for findings to be considered credible and built upon. Horace He further notes that this consistency could dramatically improve reinforcement learning (RL) training. RL is a powerful method where AI models learn by receiving rewards for correct actions, but if the AI’s responses are constantly shifting, the reward signals become noisy, making the learning process inefficient and prolonged. Smoother, more consistent responses would lead to:

- Faster Training: Clearer reward signals accelerate the learning curve.
- More Robust Models: Training on consistent data leads to more stable and reliable AI.
- Reduced Data Noise: Eliminating variability in responses cleans up the training data, improving overall model quality.

The Information previously reported that Thinking Machines Lab plans to leverage RL to customize AI models for businesses. This suggests a direct link between the current research into consistency and future product offerings aimed at delivering highly reliable, tailor-made AI solutions. Such developments could profoundly impact sectors ranging from healthcare and manufacturing to finance and logistics, where precision and reliability are paramount.

Thinking Machines Lab: A New Era of Reproducible AI

The launch of the research blog “Connectionism” signals Thinking Machines Lab’s commitment to transparency and open research, a refreshing stance in an increasingly secretive AI landscape. The inaugural post, part of an effort to “benefit the public, but also improve our own research culture,” echoes the early ideals of organizations like OpenAI. As OpenAI grew, however, its commitment to open research seemingly diminished. The tech world will be watching closely to see whether Murati’s lab can maintain this ethos while navigating the pressures of a $12 billion valuation and a competitive AI market.
Murati herself indicated in July that the lab’s first product would be unveiled in the coming months, designed to be “useful for researchers and startups developing custom models.” While it remains speculative whether this initial product will directly incorporate the techniques from the nondeterminism research, the focus on foundational problems suggests a long-term vision. By tackling core issues like reproducibility, Thinking Machines Lab is not just building new applications; it is laying the groundwork for a more stable and trustworthy AI ecosystem. The journey to truly reproducible AI is ambitious, but if successful, it could solidify the lab’s position as a leader at the frontier of AI research, setting new standards for reliability and paving the way for a new generation of dependable intelligent systems.

The Road Ahead: Challenges and Opportunities for Thinking Machines Lab

The venture is not without its challenges. Operating at a $12 billion valuation brings immense pressure to deliver not just groundbreaking research but also commercially viable products. The technical hurdles in precisely controlling GPU kernel orchestration are formidable, requiring deep expertise in both hardware and software. Furthermore, the broader AI community’s long-standing acceptance of nondeterminism means that TML is effectively challenging a deeply ingrained paradigm: success will require not only solving the technical problem but also demonstrating its practical benefits convincingly to a global audience.

The opportunities, however, are equally immense. By solving the problem of AI consistency, Thinking Machines Lab could become the standard-bearer for reliable AI, attracting partners and customers across every industry. Its commitment to sharing research publicly, through platforms like Connectionism, could foster a collaborative environment, accelerating innovation across the entire AI ecosystem. If the lab can successfully integrate its research into products that make AI models more predictable, it will not only justify its valuation but also fundamentally alter how businesses and scientists interact with artificial intelligence, making it a more dependable and indispensable tool for progress.

In conclusion, Thinking Machines Lab’s bold foray into defeating nondeterminism in LLM inference represents a pivotal moment in AI development. By striving for greater AI consistency, Mira Murati and her team are addressing a core limitation that has hindered broader AI adoption in critical sectors. Their focus on the intricate details of GPU kernel orchestration demonstrates a profound commitment to foundational research, promising a future where AI models are not just powerful but also reliably predictable, a potential boon for every industry, including the dynamic world of digital assets and blockchain technology.
The bug impacted some remote procedure call (RPC) nodes, causing them to fall out of sync, but did not affect onchain block production.
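For context on what “out of sync” means operationally: the chain kept producing blocks, but affected RPC nodes stopped tracking the tip, so queries against them returned stale state. Operators typically catch this by comparing a node’s reported height against a trusted reference. Below is a minimal monitoring sketch, assuming generic Ethereum-style JSON-RPC endpoints; the URLs and lag threshold are hypothetical placeholders, not details from the incident above.

```python
import requests

# Hypothetical endpoints: substitute the RPC node you operate and a trusted reference.
MY_NODE = "http://localhost:8545"
REFERENCE = "https://rpc.example.org"
MAX_LAG = 25  # blocks of tolerated drift before alerting

def block_height(url):
    """Fetch the latest block number via JSON-RPC (Ethereum-style method shown
    for illustration; other chains expose an equivalent call)."""
    resp = requests.post(url, json={
        "jsonrpc": "2.0", "id": 1, "method": "eth_blockNumber", "params": []
    }, timeout=5)
    return int(resp.json()["result"], 16)   # result is a hex string, e.g. "0x1b4"

lag = block_height(REFERENCE) - block_height(MY_NODE)
if lag > MAX_LAG:
    print(f"RPC node is {lag} blocks behind: out of sync, rotate traffic away")
else:
    print(f"RPC node within tolerance ({lag} blocks behind)")
```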
Qualcomm’s driverless tech, co-developed with BMW, is likely to draw licensing interest from other automakers. Chief Executive Cristiano Amon told CNBC the rollout will show how the system behaves on public roads and could trigger a run of deals. The move highlights how the US-based chip company is expanding beyond smartphones into cars, one of its rapidly growing businesses.

Last week, BMW and Qualcomm announced a driving package built on Qualcomm’s semiconductors. The product, called the Snapdragon Ride Pilot Automated Driving System, is a driver-assistance feature rather than a fully driverless system: it supports hands-free driving on some roads and can execute lane changes, but it does not make the car fully autonomous in traffic. The first vehicle to use it will be the BMW iX3, and the companies say the feature will become available in 100 countries by 2026. Qualcomm says it was designed from the start to be licensed to carmakers other than BMW.

In a Tuesday interview, Amon said the BMW launch will give shoppers and rivals a clear look at performance. “Everybody’s been waiting for this moment, including ourselves, because people wanted to see how it performs in the street,” he said. He added that the iX3 will ship with the tech in about 60 countries, creating a large, real-world demo. “I think what I expect to happen, as OEMs see how it compares and how competitive it is, that’s going to ignite a domino effect” of carmakers looking to integrate the technology. Amon said the company has “made a lot of progress” in talks with other manufacturers but is “not yet ready to announce” any new partnerships.

The approach fits a broader shift at Qualcomm

The company still makes most of its money from chips in smartphones from vendors including Xiaomi and Samsung, but it is pushing into other industries, including PC processors, semiconductors for data centers, and automotive. The auto unit is a central bet: it generated almost $1 billion in the June quarter, up 21% from a year earlier, and Qualcomm has said it expects automotive revenue to reach $8 billion in 2029. To hit that target, the company is developing technology for many parts of the car, with chips that could power systems such as in-car entertainment. On Monday, Qualcomm announced a partnership with Google Cloud to allow automakers to develop digital assistants of their own.

Analysts say the strategy is to offer a complete stack. “[Qualcomm] are building a whole ecosystem led by software,” said Murtuza Ali, senior analyst at Counterpoint Research. “The main thing is they are a fully integrated solution provider for autonomy, which is what they were lacking.” Traditional carmakers, especially in Europe, are often seen as behind rivals from China on software-driven features such as autonomous driving. That gap matters as more models add advanced driver assistance and as brands try to keep customers loyal.

The electric-vehicle market in the US is also shifting

Tesla’s share of US EV sales fell to an eight-year low in August as buyers chose EVs from a growing set of competitors over the aging lineup sold by CEO Elon Musk’s company, based on information shared with Reuters. The slide comes as rivals step up incentives during a difficult stretch for the EV industry. Analysts expect US EV sales to continue at an elevated pace in September and then drop after the expiration of federal tax credits.
Tesla once controlled over 80% of the EV industry in the US. In August, it accounted for 38% of total EV sales, according to early Cox data, the first time the company has fallen below the 40% mark since October 2017. At that time, the company was ramping up production of the Model 3, its first mass-market car. Even in Europe, BMW and Mercedes are challenging Tesla’s market share, as reported earlier by Cryptopolitan.