Web3


Chill Guy Meme Coin Pumps Another 50% as Creator Fights Back – Decrypt




From TikTok trends to crypto wallets, the “Chill Guy” meme has become an internet phenomenon, turning a laid-back cartoon dog into the face of a crypto token with a market cap in the hundreds of millions of dollars.

Since its November 15 launch, the Chill Guy meme coin ($CHILLGUY) ballooned from a $10 million market cap to over $461 million, driven by the widespread popularity of the Chill Guy character—a relaxed anthropomorphic dog in a grey sweater, blue jeans, and red sneakers.

The Solana-based meme coin has increased in value by 50% over the last 24 hours alone, trading just shy of $0.50, per CoinGecko data. The token’s rise reflects the ongoing craze around meme coins, which continue to defy market norms with their volatile yet lucrative returns.

The character—an anthropomorphic brown dog sporting a grey sweater, rolled-up jeans, and red sneakers—has captivated audiences with its laid-back demeanor and become a cultural phenomenon.

Frequently paired with humorous captions on platforms like TikTok, the character embodies a carefree attitude, resonating particularly with Gen Z audiences.

However, the coin’s ascent has not been without controversy. Behind the meme coin’s success lies growing tension as the meme’s creator, Philip Banks, pushes back against what he calls unauthorized exploitation of his work.

“Just putting it out there, Chill Guy has been copyrighted. Like, legally. I’ll be issuing takedowns on for-profit related things over the next few days,” Banks tweeted last week. 

While Banks clarified that casual use by brands or individuals isn’t his target—“I just ask for credit. Or Xboxes.”—he noted unauthorized merchandise and shitcoins are crossing the line.

Despite these concerns, early adopters of CHILLGUY have seen massive returns, with one trader turning a $1,000 investment into over $1 million within days.

It isn’t the first time meme coins have demonstrated their ability to convert internet phenomena into financial windfalls. 

Recently, the Peanut the Squirrel (PNUT) token—inspired by the viral story of Peanut, a pet squirrel euthanized by New York authorities—reached a $1 billion market cap within two weeks, while the First Convicted Raccoon (FRED) coin climbed 383% in a day.

Edited by Sebastian Sinclair





5 AI Projects to Build This Weekend Using Python: From Beginner to Advanced



Getting hands-on with real-world AI projects is the best way to level up your skills. But knowing where to start can be challenging, especially if you’re new to AI. Here, we break down five exciting AI projects you can implement over the weekend with Python—categorized from beginner to advanced. Each project uses a problem-first approach to create tools with real-world applications, offering a meaningful way to build your skills.

1. Job Application Resume Optimizer (Beginner)

Updating your resume for different job descriptions can be time-consuming. This project aims to automate the process by using AI to customize your resume based on job requirements, helping you better match recruiters’ expectations.

Steps to Implement:

Convert Your Resume to Markdown: Begin by creating a simple markdown version of your resume.

Generate a Prompt: Create a prompt that will input your markdown resume and the job description and output an updated resume.

Integrate OpenAI API: Use the OpenAI API to adjust your resume dynamically based on the job description.

Convert to PDF: Use markdown and pdfkit libraries to transform the updated markdown resume into a PDF.

Libraries: openai, markdown, pdfkit

Code Example:

import openai
import pdfkit

openai.api_key = "your_openai_api_key"

def generate_resume(md_resume, job_description):
    prompt = f"""
    Adapt my resume in Markdown format to better match the job description below. \
    Tailor my skills and experiences to align with the role, emphasizing relevant \
    qualifications while maintaining a professional tone.

    Resume in Markdown:
    {md_resume}

    Job Description:
    {job_description}

    Please return the updated resume in Markdown format.
    """

    # Chat models use the ChatCompletion endpoint, not Completion
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}]
    )

    return response.choices[0].message["content"]

md_resume = "Your markdown resume content here."
job_description = "Job description content here."

updated_resume_md = generate_resume(md_resume, job_description)

# Convert the tailored Markdown resume into a PDF
pdfkit.from_string(updated_resume_md, "optimized_resume.pdf")

This project can be expanded to allow batch processing for multiple job descriptions, making it highly scalable.
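As a minimal sketch of that batch-processing idea (with `generate_resume` stubbed out so the loop structure is visible without an API key—in practice it would be the API-backed function above), the expansion might look like this:

```python
# Sketch: tailor one resume against several job descriptions in a loop.
# generate_resume is a stand-in stub; swap in the real API-backed version.
def generate_resume(md_resume, job_description):
    return f"# Resume tailored to: {job_description}"

def batch_optimize(md_resume, job_descriptions):
    # Map an output filename to each tailored resume
    results = {}
    for i, jd in enumerate(job_descriptions, start=1):
        results[f"resume_v{i}.md"] = generate_resume(md_resume, jd)
    return results

versions = batch_optimize(
    "## Skills\n- Python",
    ["Data Engineer role", "ML Engineer role"],
)
print(sorted(versions))  # ['resume_v1.md', 'resume_v2.md']
```

Each tailored Markdown file could then be fed through `pdfkit` exactly as in the single-resume version.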

2. YouTube Video Summarizer (Beginner)

Many of us save videos to watch later, but rarely find the time to get back to them. A YouTube summarizer can automatically generate summaries of educational or technical videos, giving you the key points without the full watch time.

Steps to Implement:

Extract Video ID: Use regex to extract the video ID from a YouTube link.

Get Transcript: Use youtube-transcript-api to retrieve the transcript of the video.

Summarize Using GPT-3.5: Pass the transcript into OpenAI’s API to generate a concise summary.

Libraries: openai, youtube-transcript-api, re

Code Example:

import re
import openai
from youtube_transcript_api import YouTubeTranscriptApi

openai.api_key = "your_openai_api_key"

def extract_video_id(youtube_url):
    # YouTube video IDs are 11 characters of letters, digits, '-' and '_'
    match = re.search(r'(?:v=|\/)([0-9A-Za-z_-]{11}).*', youtube_url)
    return match.group(1) if match else None

def get_video_transcript(video_id):
    transcript = YouTubeTranscriptApi.get_transcript(video_id)
    return ' '.join(entry['text'] for entry in transcript)

def summarize_transcript(transcript):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Summarize the following transcript:\n{transcript}"}]
    )
    return response.choices[0].message["content"]

youtube_url = "https://www.youtube.com/watch?v=example"
video_id = extract_video_id(youtube_url)
transcript = get_video_transcript(video_id)
summary = summarize_transcript(transcript)

print("Summary:", summary)

With this tool, you can instantly create summaries for a collection of videos, saving valuable time.

3. Automatic PDF Organizer by Topic (Intermediate)

If you have a collection of research papers or other PDFs, organizing them by topic can be incredibly useful. In this project, we’ll use AI to read each paper, identify its subject, and cluster similar documents together.

Steps to Implement:

Read PDF Content: Extract text from the PDF’s abstract using PyMuPDF.

Generate Embeddings: Use sentence-transformers to convert abstracts into embeddings.

Cluster with K-Means: Use sklearn to group documents based on their similarity.

Organize Files: Move documents into folders based on their clusters.

Libraries: PyMuPDF, sentence_transformers, sklearn

Code Example:

import os
import shutil

import fitz  # PyMuPDF
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

model = SentenceTransformer('all-MiniLM-L6-v2')

def extract_abstract(pdf_path):
    pdf_document = fitz.open(pdf_path)
    # Use the first 500 characters of page 1 as a stand-in for the abstract
    abstract = pdf_document[0].get_text("text")[:500]
    pdf_document.close()
    return abstract

pdf_paths = ["path/to/pdf1.pdf", "path/to/pdf2.pdf"]
abstracts = [extract_abstract(pdf) for pdf in pdf_paths]
embeddings = model.encode(abstracts)

kmeans = KMeans(n_clusters=3)
labels = kmeans.fit_predict(embeddings)

# Move each PDF into a folder named after its cluster
for i, pdf_path in enumerate(pdf_paths):
    folder_name = f"Cluster_{labels[i]}"
    os.makedirs(folder_name, exist_ok=True)
    shutil.move(pdf_path, os.path.join(folder_name, os.path.basename(pdf_path)))

This organizer can be customized to analyze entire libraries of documents, making it an efficient tool for anyone managing large digital archives.
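A small sketch of that library-scale extension (the `my_papers` directory name is a placeholder—point it at your own archive): recursively collect every PDF under a root folder, then hand the resulting list to the clustering code above.

```python
import glob
import os

# Sketch: gather every PDF under a library folder before clustering.
# The directory name is hypothetical; replace it with your own archive.
def collect_pdfs(root):
    return sorted(glob.glob(os.path.join(root, "**", "*.pdf"), recursive=True))

pdf_paths = collect_pdfs("my_papers")
print(f"Found {len(pdf_paths)} PDFs to cluster")
```

With hundreds of documents, you may also want to raise `n_clusters` or pick it automatically (e.g., with a silhouette-score sweep).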

4. Multimodal Document Search Tool (Intermediate)

Key information may be embedded in both text and images in technical documents. This project uses a multimodal model to enable searching for information within text and visual data.

Steps to Implement:

Extract Text and Images: Use PyMuPDF to extract text and images from each PDF section.

Generate Embeddings: Use a multimodal model to encode text and images.

Cosine Similarity for Search: Match user queries with document embeddings based on similarity scores.

Libraries: PyMuPDF, sentence_transformers, sklearn

Code Example:

import io

import fitz  # PyMuPDF
from PIL import Image
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

# CLIP maps text and images into the same embedding space
model = SentenceTransformer('clip-ViT-B-32')

def extract_text_and_images(pdf_path):
    pdf_document = fitz.open(pdf_path)
    chunks = []
    for page in pdf_document:
        chunks.append(page.get_text("text")[:500])
        for img in page.get_images(full=True):
            # Decode each embedded image into a PIL Image for CLIP
            image_bytes = pdf_document.extract_image(img[0])["image"]
            chunks.append(Image.open(io.BytesIO(image_bytes)))
    pdf_document.close()
    return chunks

def search_query(query, chunks):
    query_embedding = model.encode(query)
    # The CLIP model accepts both strings and PIL images
    chunk_embeddings = model.encode(chunks)
    similarities = cosine_similarity([query_embedding], chunk_embeddings)[0]
    return similarities

pdf_path = "path/to/document.pdf"
document_chunks = extract_text_and_images(pdf_path)
similarities = search_query("User's search query here", document_chunks)
print("Top matching sections:", similarities.argsort()[::-1][:3])

This multimodal search tool makes it easier to sift through complex documents by combining text and visual information into a shared search index.

5. Advanced Document QA System (Advanced)

Building on the previous project, this system allows users to ask questions about documents and get concise answers. We use document embeddings to find relevant information and a user interface to make it interactive.

Steps to Implement:

Chunk and Embed: Extract and embed each document’s content.

Create Search + QA System: Use embeddings for search and integrate with OpenAI’s API for question-answering.

Build an Interface with Gradio: Set up a simple Gradio UI for users to input queries and receive answers.

Libraries: PyMuPDF, sentence_transformers, openai, gradio

Code Example:

import gradio as gr
import openai
from sentence_transformers import SentenceTransformer

openai.api_key = "your_openai_api_key"
model = SentenceTransformer("all-MiniLM-L6-v2")

def generate_response(message, history):
    # In the full system, embed the query with `model`, retrieve the most
    # similar document chunks, and prepend them to the prompt as context.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": message}]
    )
    return response.choices[0].message["content"]

demo = gr.ChatInterface(
    fn=generate_response,
    examples=["Explain this document section"]
)

demo.launch()

This interactive QA system, using Gradio, brings conversational AI to documents, enabling users to ask questions and receive relevant answers.

These weekend AI projects offer practical applications for different skill levels. From resume optimization to advanced document QA, these projects empower you to build AI solutions that solve everyday problems, sharpen your skills, and create impressive additions to your portfolio.




The Rise of Web3: How Blockchain Technology is Revolutionizing the Internet

The internet has become an integral part of our daily lives, from communication and entertainment to shopping and banking. However, with the rise of centralized platforms and data breaches, concerns about privacy and security have also increased. This has led to the emergence of Web3, a new era of the internet that is powered by blockchain technology.

Web3, also known as the decentralized web, is a term used to describe the next generation of the internet. It is built on the principles of decentralization, transparency, and security, and aims to give users more control over their data and online interactions. This is made possible through the use of blockchain technology, which is the backbone of Web3.

Blockchain technology is a decentralized ledger that records and stores data in a secure and transparent manner. It is the technology behind cryptocurrencies like Bitcoin and Ethereum, but its potential goes far beyond just digital currencies. With its decentralized nature, blockchain eliminates the need for intermediaries, making it a more efficient and secure way to store and transfer data.
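As a conceptual illustration only (a real blockchain adds consensus, digital signatures, and a peer-to-peer network), the core tamper-evident-ledger idea can be sketched in a few lines of Python:

```python
import hashlib
import json

# Toy tamper-evident ledger: each block stores the hash of the previous
# one, so altering any past record invalidates the rest of the chain.
def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev_hash})
    return chain

def is_valid(chain):
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, "Alice pays Bob 5 tokens")
add_block(chain, "Bob pays Carol 2 tokens")
print(is_valid(chain))                          # True
chain[0]["data"] = "Alice pays Bob 500 tokens"  # tamper with history
print(is_valid(chain))                          # False
```

The hash linking is what makes the ledger transparent and auditable: any participant can recompute the hashes and detect tampering.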

One of the key innovations of Web3 is the concept of self-sovereign identity. In the current internet landscape, users have to rely on centralized platforms to store and manage their personal information. This puts their data at risk of being hacked or misused. With Web3, users have full control over their identity and can choose what information they want to share with different platforms. This not only enhances privacy but also reduces the risk of identity theft.

Another major innovation of Web3 is the concept of decentralized applications (DApps). These are applications that run on a decentralized network, rather than a single server. This makes them more resilient to cyber attacks and censorship, as there is no central point of failure. DApps also offer a more transparent and fair system, as they are governed by smart contracts that are executed automatically without the need for intermediaries.

Web3 is also revolutionizing the way we handle financial transactions. With the use of blockchain technology, payments can be made directly between two parties without the need for a bank or other financial institution. This not only reduces transaction fees but also eliminates the risk of fraud and chargebacks. Additionally, blockchain-based payment systems are faster and more efficient, as they operate 24/7 and can process transactions in a matter of seconds.

The rise of Web3 has also given birth to the concept of decentralized finance (DeFi). DeFi refers to financial services that are built on blockchain technology, such as lending, borrowing, and trading. These services operate without the need for intermediaries, making them more accessible and affordable for users. DeFi has the potential to disrupt traditional financial systems, as it offers a more inclusive and transparent alternative.

Web3 is not just limited to financial and identity-related innovations. It is also transforming the way we interact with content on the internet. With the rise of Web3, content creators can now monetize their work directly without relying on advertising revenue or third-party platforms. This is made possible through the use of blockchain-based content platforms, where users can pay for access to premium content using cryptocurrencies.

In conclusion, Web3 is revolutionizing the internet in ways we could have never imagined. Its decentralized and transparent nature is paving the way for a more secure, fair, and inclusive online world. With the rise of Web3, we can expect to see more innovations and advancements in various industries, from finance and identity management to content creation and beyond. It is an exciting time to be a part of this technological revolution, and the potential for a better internet is endless.

Exploring the Potential of Web3: A Look into Decentralized Applications and Smart Contracts

The internet has revolutionized the way we live, work, and communicate. With the rise of Web 2.0, we saw the emergence of social media, e-commerce, and other interactive platforms that have transformed the digital landscape. However, as technology continues to advance, we are now entering a new era of the internet – Web3. This next generation of the internet is set to bring about even more significant changes, particularly in the realm of decentralized applications and smart contracts.

Web3, also known as the decentralized web, is a term used to describe the evolution of the internet from a centralized system to a decentralized one. This means that instead of relying on a central authority, Web3 operates on a peer-to-peer network, where users can interact directly with each other without the need for intermediaries. This shift towards decentralization has been made possible by the development of blockchain technology.

Blockchain technology is the backbone of Web3, and it is what enables the creation of decentralized applications (DApps) and smart contracts. DApps are applications that run on a decentralized network, making them resistant to censorship and tampering. These applications are built on blockchain technology, which ensures that all data and transactions are transparent and immutable. This means that DApps can provide a level of security and trust that traditional centralized applications cannot.

One of the most significant advantages of DApps is their potential to disrupt traditional industries. For example, in the financial sector, DApps can provide an alternative to traditional banking systems by allowing for peer-to-peer transactions without the need for intermediaries. This not only reduces transaction fees but also eliminates the risk of fraud and manipulation. Similarly, in the healthcare industry, DApps can improve the security and privacy of patient data by storing it on a decentralized network, making it less vulnerable to cyber attacks.

Another crucial aspect of Web3 is smart contracts. Smart contracts are self-executing contracts that are coded on a blockchain network. These contracts can automatically enforce the terms and conditions agreed upon by the parties involved, without the need for intermediaries. This not only reduces the time and cost of contract execution but also eliminates the potential for human error. Smart contracts have the potential to revolutionize various industries, from supply chain management to real estate.
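As a purely illustrative sketch in Python (real smart contracts are deployed on-chain in languages such as Solidity), the "self-executing agreement" idea looks roughly like this:

```python
# Toy escrow "contract": payment releases automatically once the agreed
# condition is met, with no intermediary deciding the outcome.
class EscrowContract:
    def __init__(self, buyer, seller, amount):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.delivered = False
        self.released = False

    def confirm_delivery(self):
        self.delivered = True
        self._execute()

    def _execute(self):
        # The contract enforces its own terms: funds move only
        # after delivery has been confirmed, and only once.
        if self.delivered and not self.released:
            self.released = True

contract = EscrowContract("alice", "bob", 10)
contract.confirm_delivery()
print(contract.released)  # True
```

On a blockchain, the same logic would run identically on every node, so neither party can alter the terms after agreement.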

One of the most significant developments in the world of Web3 is the creation of decentralized autonomous organizations (DAOs). DAOs are organizations that operate on a decentralized network, with decisions being made through a consensus of its members. These organizations have no central authority, and all decisions are transparent and immutable, making them resistant to corruption. DAOs have the potential to transform the way businesses are run, as they provide a more democratic and decentralized approach to decision-making.

However, as with any new technology, Web3 also has its challenges. One of the main obstacles is the lack of user-friendly interfaces and adoption. While blockchain technology and DApps have been around for some time, they are still relatively new to the general public. This means that there is a learning curve for users to understand and navigate these decentralized networks. Additionally, there is a need for more user-friendly interfaces and applications to encourage widespread adoption.

In conclusion, Web3 is set to bring about significant changes in the digital landscape, particularly in the areas of decentralized applications and smart contracts. With its potential to disrupt traditional industries and provide a more secure and transparent way of conducting transactions, Web3 has the power to transform the way we interact with the internet. However, for this potential to be fully realized, there is a need for more user-friendly interfaces and widespread adoption. As we continue to explore the potential of Web3, it is clear that we are entering a new era of the internet that has the power to shape our future in ways we never thought possible.

Web3 and the Future of E-commerce: How Blockchain is Changing the Way We Shop Online

In recent years, the term “Web3” has been gaining traction in the tech world. But what exactly does it mean and how is it impacting the future of e-commerce? In this article, we will explore the concept of Web3 and how blockchain technology is revolutionizing the way we shop online.

Web3, also known as the decentralized web, is the next generation of the internet. It is built on the principles of decentralization, transparency, and user control. Unlike the current web, which is controlled by a few centralized entities, Web3 is a peer-to-peer network where users have more control over their data and interactions.

One of the key technologies driving Web3 is blockchain. Blockchain is a distributed ledger technology that allows for secure and transparent transactions without the need for intermediaries. This means that users can transact directly with each other, cutting out the middlemen and reducing costs.

So, how is Web3 changing the e-commerce landscape? Let’s take a closer look.

First and foremost, Web3 is making online shopping more secure. With traditional e-commerce platforms, users have to trust the platform to keep their personal and financial information safe. However, with Web3, transactions are encrypted and stored on a decentralized network, making it nearly impossible for hackers to access sensitive data.

Moreover, Web3 is also making online shopping more transparent. With blockchain technology, every transaction is recorded on a public ledger, allowing for complete transparency and traceability. This means that consumers can verify the authenticity of products and track their supply chain, ensuring ethical and sustainable practices.

Another significant impact of Web3 on e-commerce is the elimination of middlemen. Traditional e-commerce platforms charge fees for transactions and often take a cut of the profits. With Web3, these intermediaries are no longer needed, reducing costs for both buyers and sellers. This also means that small businesses and independent sellers can compete with larger corporations on a more level playing field.

Furthermore, Web3 is enabling new business models in e-commerce. One such model is the concept of decentralized marketplaces. These marketplaces are built on blockchain technology and allow for peer-to-peer transactions without the need for a central authority. This not only reduces costs but also gives more power to the users, as they can set their own prices and terms of sale.

In addition to these changes, Web3 is also making online shopping more personalized. With the use of smart contracts, which are self-executing contracts on the blockchain, e-commerce platforms can gather data on consumer preferences and behavior. This data can then be used to create personalized shopping experiences, making it easier for consumers to find what they are looking for.

Moreover, Web3 is also enabling the use of cryptocurrencies in e-commerce. With the rise of digital currencies like Bitcoin and Ethereum, consumers can now make purchases using these currencies on decentralized marketplaces. This not only provides more options for consumers but also reduces the risk of fraud and chargebacks for merchants.

However, like any new technology, Web3 also has its challenges. One of the main challenges is the lack of user-friendly interfaces. Currently, most Web3 applications require some technical knowledge to use, making it less accessible to the general public. However, as the technology evolves, we can expect to see more user-friendly interfaces that will make Web3 more mainstream.

In conclusion, Web3 and blockchain technology are transforming the e-commerce industry in many ways. From increased security and transparency to new business models and personalized shopping experiences, the potential for Web3 in e-commerce is vast. As the technology continues to evolve, we can expect to see even more innovations and advancements in the way we shop online.

The gaming lesson from Off The Grid and Telegram? Put blockchain in the background




The following is a guest post from Leo Li, CVO and Chief Growth Officer at CARV.

Off The Grid could be the mainstream moment we’ve been waiting for in web3 gaming – not because it flaunts blockchain features, but because it doesn’t. The major console release integrates NFTs and blockchain in the background, letting gameplay take center stage. The game is the main appeal, while the blockchain is a bonus that furthers trading, ownership, and expression.

Much like Telegram, which is quietly slipping crypto wallet functionality into hundreds of millions of pockets, game developers realize that simplicity is vital. Hitting gamers over the head with painful onboarding and crypto-heavy concepts can alienate potential players. 

The next gaming bull run will not be driven by blockchain games—it will be driven by great games that happen to use blockchain.

The quiet revolution in blockchain gaming

Off The Grid is causing a lot of noise for a game that’s not officially out yet. The cyberpunk battle royale from Gunzilla Games – only available via early access on PC, PlayStation 5, and Xbox Series X – is getting plenty of attention for its tongue-in-cheek violence and gritty visuals from filmmaker Neill Blomkamp (District 9, Elysium).

“Part schlocky satire of streaming culture, part sendup of gamers, all shrewd self-aware storytelling: Off The Grid is a fun time,” CNET wrote in October.

It’s a fun time, indeed, that’s entirely on the blockchain. Blink, and you might miss it, but Off The Grid is a native web3 title built upon an Avalanche subnet. This enables crypto capabilities alongside gameplay, including a forthcoming token and the NFT minting and trading of in-game weapons and skins on OpenSea. The best part? So far, players aren’t obligated to engage with these features. The blockchain and the game are separate yet complementary, with the former intended to enhance the latter rather than override it.

It’s a similar story over on Telegram. Players are flocking to the simplicity and engagement of clicker games inside the chat platform. Thanks to endorsing The Open Network (TON) as its official web3 infrastructure and integrating wallet functionality, Telegram makes it easy for gamers to embrace crypto without knowing it. Gone are complicated onboarding and clunky interfaces. Telegram’s mini-app gaming occurs automatically inside the known and trusted chat platform.

The successes are impressive across the board. TON clicker Notcoin and its eponymous token rose to a market cap of more than half a billion dollars, and BANANA earned 12 million players just over a month after launch. Meanwhile, Off The Grid is also gaining serious traction: millions of wallets were created during the first week of early access, and the game went on to become the top free title on the Epic Games Store. The numbers tell a clear story: mass adoption follows when blockchain integration is invisible.

The lesson for blockchain game makers

Since the sector’s inception, we’ve been waiting for the blockchain gaming “mainstream moment.” The one-two punch of Off The Grid and Telegram suggests we’re inching closer, but their wins hold essential lessons for developers.

First, arming your game with blockchain isn’t enough. About 400 crypto games stopped development last year, and since 2018, more than three-quarters of all blockchain games have failed to gain traction and been discontinued. The reason? Games weren’t fun enough, and blockchain features weren’t compelling enough. Going forward, game makers must start with both the game and the blockchain use case before working backward. Without a good game behind it, the blockchain elements never have a chance to take off.

The second lesson is about accessibility. One-quarter of web3 leaders agree that the learning curve for blockchain technology and the lack of user-friendly interfaces remain significant hurdles to adoption. Solutions like CARV ID are instructive here, uniting gamer profiles under one banner to improve interoperability and showcase achievements across games. This new crop of games shows that we need to start with great gameplay, make it easy to access, and bake blockchain into the foundation. This is the best way to make both sides of “blockchain gaming” work together.

In my view, it’s time to think of blockchain gaming like cloud computing—something consumers seldom encounter, but that drives the backend thanks to invisible integration. Just as we don’t think about cloud computing when using Netflix, players shouldn’t have to think about blockchain when inside our games.

Balancing gameplay and blockchain in 2025

Heading into the new year, blockchain gaming is better positioned to break into the mainstream. AAA studios are taking us seriously, and gamers are coming around to the idea of blockchain features that further the play experience.

However, there’s a fine line between creating a great game and ensuring meaningful crypto synergy. Neopets Metaverse, an NFT-powered game based on the massively popular 1999 pet simulator, was in development for roughly two years before it was abruptly canceled. The reason? The CEO said gamers “didn’t care” about blockchain, and the title was slated to be relaunched as a regular mobile game. Clearly, the game needs to be good, and adding blockchain needs to make sense, add value, and improve the ecosystem.

Here, understanding the audience is crucial. Gamers trend younger and want better ways to express themselves, own their identities, and make money online – all things blockchain can enable. Therefore, making these features accessible and meaningful with blockchain will drive the next web3 gaming bull run.

This is the challenge for our sector in 2025 – making blockchain relevant to the game, invisible to the player, and powerful enough to drive adoption. It’s now up to us – the developers – to get there. Off The Grid and Telegram are showing us the way forward – put blockchain in the background and let the games lead the charge.


AI Crypto Startup O.XYZ Faces Allegations of Misrepresentation and Internal Turmoil: Sources – Decrypt




O.XYZ, a blockchain and AI company touting crypto and artificial intelligence services, is facing allegations of falsely inflating its technological claims and engaging in aggressive tactics to suppress dissent within the company.

While founder Ahmad Shadid has defended both his and the company’s actions, multiple sources familiar with the company’s operations who spoke with Decrypt have refuted public claims, alleging widespread misrepresentation of O.XYZ’s capabilities.

O.XYZ positions itself as a community-owned “Super AI” ecosystem. The company claims to leverage substantial GPU computing power, purportedly deploying tens of thousands of open-source models, enabling it to execute a wide array of tasks.

Sources claim the company has exaggerated its capabilities, falsely stating it can connect to over 100,000 AI models, runs 20 times faster than competitors, and owns powerful hardware it doesn’t actually possess. 

It’s also accused of inflating the value of its satellite program and misrepresenting its token launch, raising questions about transparency and accountability.

As a result of those allegations, sources claim holders of the company’s recently launched O.XYZ token are at risk of being harmed.

In an emailed statement to Decrypt, Shadid issued a detailed response to concerns raised about the company’s claims, insisting that O.XYZ’s promotional language is “forward-looking” and aligned with its development roadmap. 

However, sources who spoke with Decrypt dispute this characterization, pointing to materials on O.XYZ’s website and investor presentations that describe capabilities as existing rather than aspirational.

In June, Shadid stepped down as CEO of Solana-based decentralized infrastructure provider IO.net—a company he founded—amid allegations about his past and misreported company metrics, saying the move would reduce distractions and let the company focus on growth.

A public statement Shadid published amid his departure from IO has since been deleted from Twitter (aka X). To avoid conflicts and distance itself from Shadid, IO agreed to offer a “six-figure severance,” one source familiar with the matter told Decrypt. IO earlier this year raised $30 million in a Series A round from notable crypto industry investors, including Hack VC, Solana Labs, Aptos Labs, Multicoin Capital, and Animoca Brands. 

Several sources who have previously worked with Shadid described him as a “smart, capable individual” who consistently manages to assemble a highly experienced team for the job. However, both a former employee and an investor who wished not to be named stated they would “never work with Shadid again.”

Disputed infrastructure and performance claims

In response to allegations that O.XYZ is exaggerating its capabilities, Shadid highlighted the company’s investments in U.S.-based Cerebras Systems hardware and plans to deploy cutting-edge AI data centers, asserting that its infrastructure supports “20x faster” AI processing. He cited benchmarks of Cerebras WSE-3 chips as evidence of O.XYZ’s performance leap.

Sources dismissed those claims as “patently false,” instead alleging O.XYZ has yet to acquire the necessary hardware for such operations, despite Shadid’s claims of “advanced talks” with Cerebras.

“There’s no internal benchmarking supporting the 20x figure,” said one source, who noted that the company’s routing technology might actually increase latency rather than reduce it.

Misleading Starlink and partnership claims

O.XYZ has also promoted itself as being powered by SpaceX’s Starlink, with Shadid emphasizing the technology’s integration within the company’s operations. 

He further clarified that the claim refers to O.XYZ’s ongoing infrastructure roadmap, including plans for “maritime connectivity solutions” and future AI capabilities in space slated for 2026.

However, sources strongly contest that narrative. Instead, they assert Starlink is only used for basic internet connectivity in remote areas and plays no role in AI processing. 

“No satellite designs exist within the company, and there’s no engineering team capable of developing such capabilities,” one source told Decrypt. They added that there are no ongoing discussions with SpaceX, despite the impression created in marketing materials.

Shadid’s responses also addressed the display of logos from major organizations such as OpenAI and Neuralink, claiming they were used to represent contributors’ backgrounds rather than formal partnerships. 

However, sources allege that this practice misleads investors and customers, noting that contributors requested their logos be removed after leaving the company—a request that allegedly has yet to be resolved.

Controversy around token launch

The company’s O.XYZ token launch on October 15 across multiple “lesser-known” exchanges has been another flashpoint. While the token only averages around $23,000 in daily trading volume across all exchanges—with a fully diluted valuation of just $8.1 million—sources say it’s only a matter of time before token holders are harmed.

“There is no way to use the token to pay for anything like API calls for the company AI, nor does the token legally entitle the holder to any assets of the company,” one of the sources said.

Shadid characterized the “initial liquidity pool activation” as occurring during a “testing phase,” which was “immediately communicated to the community.”

“After a thorough market condition analysis, we made a strategic decision to proceed with the launch rather than withdraw the liquidity, effectively advancing our planned token release timeline,” Shadid said.

He added: “This decision was communicated transparently through multiple channels, including Discord and internal communications. While the initial activation was unplanned, our subsequent decision to maintain the token’s availability was deliberate and strategic. We maintain comprehensive documentation of all communications throughout this process, demonstrating our commitment to transparency with both our community and stakeholders.”

One former employee who did not wish to be named, for fear of reprisal, shared that they were offered financial incentives tied to a non-disclosure agreement after questioning the ethical implications of the launch. 

Another source alleged, “Shadid was testing trading algorithms when the ‘accident’ occurred.”

“Was testing my O.CAPITAL market maker quant systems, and it created a pool on Uniswap, and tokens went live by mistake,” according to a screenshot reviewed by Decrypt of a message from Shadid posted to a general Slack channel for all employees to see. “I can’t take it down.”

Secret recordings also reviewed by Decrypt appear to contradict Shadid’s explanation. Sources say the token launch was instead deliberate, and employees were told differing stories—some that it was intentional, others that it was a “mistake.”

“Totally against what the public-facing company docs would have people believe with lines of transparency and community ownership,” one source said. “Ahmad owns all the tokens effectively and can dump them at a whim.”

Allegations of retaliatory practices

Sources claim that O.XYZ has used non-disclosure agreements to suppress dissent. They described a culture of retaliation, including terminations following inquiries into the company’s operations. 

“The NDAs are being weaponized to silence legitimate concerns,” one source alleged.

Shadid defended the company’s contractor-based employment model and strict confidentiality agreements, stating these practices are standard in the industry. 

Shadid has not directly addressed the allegations of retaliation, but emphasized O.XYZ’s commitment to “clear, accurate communication” and “comprehensive documentation” of its strategic goals.

In any case, the allegations have led several former employees and contributors to seek legal counsel. Sources Decrypt spoke to say those former employees are now exploring further options to shed light on O.XYZ’s alleged practices.




Source link

Peace of mind, one pill at a time: Pillgram’s intelligent dispenser simplifies pill management at home | Web3Wire


Image: https://www.getnews.info/wp-content/uploads/2024/11/1732307630.jpg

Pillgram, an innovative medication management system designed to simplify the process of taking medications, is set to launch its Kickstarter campaign. Pillgram combines AI-powered technology, real-time monitoring, and caregiver notifications, providing a seamless solution for those managing medication for chronic conditions or juggling numerous or complicated medication regimens. The campaign aims to fund the final stages of development, offering early supporters exclusive discounts and other benefits.

Pillgram is not just another pill dispenser; it is a comprehensive, AI-assisted medication management solution that integrates with a user-friendly app to provide patients and caregivers peace of mind. The system ensures doses are never missed, tracking and verifying every intake with visual guidance and timed alerts.

Image: https://www.getnews.info/uploads/a67e04554673ad4b6f08c6d7d0bef768.jpg

“Managing medication should be stress-free and reliable. Pillgram offers an easy-to-use, intelligent solution that supports patients and caregivers at every step of the way,” said Jacques Amar, CEO of Pillgram. “Our Kickstarter campaign invites early backers to be part of this healthcare innovation and benefit from exclusive rewards.”

Pillgram’s Key Features Include:

*Timed Alerts & Visual Guidance: The device and app work together to send timely reminders, displaying the correct pill with an illuminated compartment to eliminate confusion.

*Pill Verification & Real-Time Monitoring: Patients confirm their intake through the app or device, allowing caregivers to receive immediate alerts if a dose is missed, ensuring swift action can be taken when necessary.

*Personalized Care & FDA-Integrated Database: Pillgram customizes the medication schedule based on individual needs, providing accurate information and reducing errors through its integration with the FDA’s medication database.

Image: https://www.getnews.info/uploads/fba2673fea68ea8ca05f0eab1c894db6.jpg

Pillgram’s Kickstarter Campaign and Benefits for Backers

The Kickstarter campaign offers early backers a unique opportunity to support the future of medication management while gaining early access to features and discounted pricing. The funds raised will be directed towards the final phase of development and production, with an estimated delivery timeline communicated transparently to backers.

For more information, or to support the campaign, visit the Kickstarter page at https://www.kickstarter.com/projects/pillgram/pillgram-your-pill-management-simplified

Media Contact
Company Name: Pillgram
City: Los Angeles
State: California
Country: United States
Website: http://www.kickstarter.com/projects/pillgram/pillgram-your-pill-management-simplified?ref=a801f8

This release was published on openPR.

About Web3Wire Web3Wire – Information, news, press releases, events and research articles about Web3, Metaverse, Blockchain, Artificial Intelligence, Cryptocurrencies, Decentralized Finance, NFTs and Gaming. Visit Web3Wire for Web3 News and Events, Block3Wire for the latest Blockchain news and Meta3Wire to stay updated with Metaverse News.



Source link

GMK Metaverse: Dominating the Digital Landscape with Unstoppable Power – Web3oclock



Key Features of GMK Metaverse

What Sets GMK Metaverse Apart?

Challenges for GMK Metaverse

Future Prospects of GMK Metaverse

What is GMK Metaverse?

Key Features of GMK Metaverse:

1. Immersive Virtual Real Estate:

2. Decentralized NFT Marketplace:

Transparency and Security:

3. Cross-Reality Integration (VR + AR):

Virtual Reality (VR)

Fully Immersive Experiences: 

Enhanced Interaction: 

Augmented Reality (AR):

Blending Physical and Digital Worlds: 

On-the-Go Accessibility: 

Unified Ecosystem:

Inclusive Design:

4. Social and Professional Spaces:

5. Interactive Gaming Ecosystem:

6. Blockchain-Powered Infrastructure

GMK Coin:

7. User-Centric Customization:

User-Generated Content (UGC):

What Sets GMK Metaverse Apart?

Future Prospects of GMK Metaverse:



Source link

Aethir Tokenomics – Case Study – Nextrope – Your Trusted Partner for Blockchain Development and Advisory Services



The 33rd Economic Forum 2024, held in Karpacz, Poland, gathered leaders from across the globe to discuss pressing economic and technological challenges. This year, the forum had a special focus on Artificial Intelligence (AI) and Cybersecurity, bringing together leading experts and policymakers.

Nextrope was proud to participate in the Forum where we showcased our expertise and networked with leading minds in the AI and blockchain fields.

Economic Forum 2024: A Hub for Innovation and Collaboration

The Economic Forum in Karpacz is an annual event often referred to as the “Polish Davos,” attracting over 6,000 participants, including heads of state, business leaders, academics, and experts. This year’s edition was held from September 3rd to 5th, 2024.

Key Highlights of the AI Forum and Cybersecurity Forum

The AI Forum and the VI Cybersecurity Forum were integral parts of the event, organized in collaboration with the Ministry of Digital Affairs and leading Polish universities, including:

Cracow University of Technology

University of Warsaw

Wrocław University of Technology

AGH University of Science and Technology

Poznań University of Technology

Objectives of the AI Forum

Promoting Education and Innovation: The forum aimed to foster education and spread knowledge about AI, promoting solutions that enhance digital transformation in Poland and CEE.

Strengthening Digital Administration: The event supported the Ministry of Digital Affairs’ mission to build and strengthen the digital administration of the Polish State, encouraging interdisciplinary dialogue on decentralized architecture.

High-Level Meetings: The forum featured closed meetings of digital ministers from across Europe, including a confirmed appearance by Volker Wissing, the German Minister for Digital Affairs.

Nextrope’s Active Participation in the AI Forum

Nextrope’s presence at the AI Forum was marked by our active engagement in various activities in the Cracow University of Technology and University of Warsaw zone. One of the discussion panels we enjoyed the most was “AI in education – threats and opportunities”.

Our Key Activities

Networking with Leading AI and Cryptography Researchers.

Nextrope presented its contributions in the field of behavioral profiling in DeFi and established relationships with cryptography researchers from Cracow University of Technology and some of the brightest minds on the Polish AI scene, coming from institutions such as Wrocław University of Technology as well as from startups.

Panel Discussions and Workshops

Our team participated in several panel discussions covering a variety of topics, including:

Polish Startup Scene.

State in the Blockchain Network

Artificial Intelligence – Threat or Opportunity for Healthcare?

Silicon Valley in Poland – Is it Possible?

Quantum Computing – How Is It Changing Our Lives?

Broadening Horizons

Besides tuning in to topics that strictly overlap with our professional expertise, we decided to broaden our horizons and participated in panels on national security and cross-border cooperation.

Meeting with clients:

We had the pleasure of deepening relationships with our institutional clients and discussing plans for the future.

Networking with Experts in AI and Blockchain

A major highlight of the Economic Forum in Karpacz was the opportunity to network with experts from academia, industry, and government.

Collaborations with Academia:

We engaged with scholars from leading universities such as the Cracow University of Technology and the University of Warsaw. These interactions laid the groundwork for potential research collaborations and joint projects.

Building Strategic Partnerships:

Our team connected with industry leaders, exploring opportunities for partnerships around building the future of education. We met many extremely smart yet humble people interested in joining the advisory board of one of our projects, HackZ.

Exchanging Knowledge with VCs and Policymakers:

We had fruitful discussions with policymakers and very knowledgeable representatives of venture capital firms. The discussions revolved around blockchain and AI regulation, emerging education methods, and dilemmas around digital transformation in companies. These exchanges provided us with valuable insights as well as new friendships.

Looking Ahead: Nextrope’s Future in AI and Blockchain

Nextrope’s participation in the Economic Forum Karpacz 2024 has solidified our position as one of the leading deep-tech software houses in CEE. By fostering connections with academia, industry experts, and policymakers, we are well-positioned to consult our clients on trends and regulatory needs, as well as to implement cutting-edge DeFi software.

What’s Next for Nextrope?

Continuing Innovation:

We remain committed to developing cutting-edge software solutions and designing token economies that leverage the power of incentives and advanced cryptography.

Deepening Academic Collaborations:

The partnerships formed at the forum will help us stay at the forefront of technological advancements, particularly in AI and blockchain.

Expanding Our Global Reach:

The international connections made at the forum enable us to expand our influence both in CEE and outside of Europe. This reinforces Nextrope’s status as a global leader in technology innovation.

If you’re looking to create a robust blockchain system and go through institutional-grade testing, please reach out to contact@nextrope.com. Our team is ready to help you with the token engineering process and ensure your project’s resilience in the long term.



Source link

Enhancing RAG Context Recall with a Custom Embedding Model: Guide



Retrieval-augmented generation (RAG) has become a go-to approach for integrating large language models (LLMs) into specialized business applications, allowing proprietary data to be directly infused into the model’s responses. However, as powerful as RAG is during the proof of concept (POC) phase, developers frequently encounter significant accuracy drops when deploying it into production. This issue is especially noticeable during the retrieval phase, where the goal is to accurately retrieve the most relevant context for a given query—a metric often referred to as context recall.

This guide focuses on how to improve context recall by customizing and fine-tuning an embedding model. We’ll explore embedding models, how to prepare a dataset tailored to your needs, and specific steps for training and evaluating your model, all of which can significantly enhance RAG’s performance in production. Here’s how to refine your embedding model and boost your RAG context recall by over 95%.

What is RAG and Why Does it Struggle in Production?

RAG consists of two primary steps: retrieval and generation. During retrieval, the model fetches the most relevant context by converting text into vectors, then indexing, retrieving, and re-ranking those vectors to select the top matches. In the generation stage, the retrieved context is combined with the prompt and sent to the LLM to generate a response. Unfortunately, the retrieval phase often fails to surface all relevant contexts, causing drops in context recall and leading to less accurate generation outputs.
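The two stages can be sketched end to end. Below is a minimal, self-contained illustration (not the article's code): a toy keyword-overlap retriever stands in for the vector index, and prompt assembly stands in for the actual LLM call.

```python
import re

def retrieve(query, corpus, k=2):
    # Toy stand-in for vector search: score each document by how many
    # words it shares with the query, then keep the top-k matches
    q_words = set(re.findall(r"\w+", query.lower()))
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(re.findall(r"\w+", doc.lower()))),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, contexts):
    # Generation step: the retrieved contexts are combined with the
    # question into a prompt that would be sent to the LLM
    return "Context:\n" + "\n".join(contexts) + f"\n\nQuestion: {query}"

corpus = [
    "Embedding models map sentences to dense vectors.",
    "Cosine similarity ranks retrieved contexts by relevance.",
    "Bananas are yellow.",
]
top = retrieve("how is similarity used to rank contexts", corpus)
print(build_prompt("how is similarity used to rank contexts", top))
```

In production, `retrieve` is replaced by a vector index over embeddings, which is exactly where context recall degrades and where the fine-tuning below helps.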

One solution is adapting the embedding model—a neural network designed to understand the relationships between text data—so it produces embeddings that are highly specific to your dataset. This fine-tuning enables the model to create similar vectors for similar sentences, allowing it to retrieve contexts that are more relevant to the query.

Understanding Embedding Models

Embedding models extend beyond simple word vectors, offering sentence-level semantic understanding. For instance, embedding models trained with techniques such as masked language modeling learn to predict masked words within a sentence, giving them a deep understanding of language structure and context. These embeddings are often optimized using distance metrics like cosine similarity to prioritize and rank the most relevant contexts during retrieval.

For example, an embedding model might generate similar vectors for two superficially different sentences—say, one about a sunset and one about autumn foliage. Even though they describe different things, both relate to the theme of color and nature, so they are likely to have a high similarity score.
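That notion of "high similarity score" is typically cosine similarity between the embedding vectors. A minimal sketch, using toy 3-dimensional vectors as stand-ins for real sentence embeddings:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: near 1.0 means the
    # directions (and thus the meanings) are closely aligned
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([0.9, 0.1, 0.0])      # e.g. a sentence about sunsets
related = np.array([0.8, 0.2, 0.1])    # e.g. a sentence about autumn leaves
unrelated = np.array([0.0, 0.1, 0.9])  # e.g. a sentence about tax filings

print(cosine_similarity(query, related))    # high, ~0.98
print(cosine_similarity(query, unrelated))  # low, ~0.01
```

Real embeddings have hundreds of dimensions, but the ranking logic during retrieval is exactly this comparison applied corpus-wide.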

For RAG, high similarity between a query and relevant context ensures accurate retrieval. Let’s examine a practical case where we aim to improve this similarity for better results.

Customizing the Embedding Model for Enhanced Context Recall

To significantly improve context recall, we adapt the embedding model to our specific dataset, making it better suited to retrieve relevant contexts for any given query. Rather than training a new model from scratch, which is resource-intensive, we fine-tune an existing model on our proprietary data.

Why Not Train from Scratch?

Starting from scratch isn’t necessary because most embedding models are pre-trained on billions of tokens and have already learned a substantial amount about language structures. Fine-tuning such a model to make it domain-specific is far more efficient and ensures quicker, more accurate results.

Step 1: Preparing the Dataset

A customized embedding model requires a dataset that closely mirrors the kind of queries it will encounter in real use. Here’s a step-by-step breakdown:

Training Set Preparation

Mine Questions: Extract a wide range of questions related to your knowledge base using the LLM. If your knowledge base is extensive, consider chunking it and generating questions for each chunk.

Paraphrase for Variability: Paraphrase each question to expand your training dataset, helping the model generalize better across similar queries.

Organize by Relevance: Assign each question a corresponding context that directly addresses it. The aim is to ensure that during training, the model learns to associate specific queries with the most relevant information.
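The three steps above can be sketched as a small pipeline. This is illustrative only: `ask_llm` is a hypothetical stand-in for a real chat-completion call that mines questions (and paraphrases) from a chunk.

```python
def chunk_text(text, max_words=100):
    # Split a long knowledge base into fixed-size word chunks so
    # questions can be mined from each chunk separately
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def build_training_pairs(knowledge_base, ask_llm):
    # Pair every mined question with the chunk that answers it, so the
    # model learns to associate queries with their relevant context
    pairs = []
    for chunk in chunk_text(knowledge_base):
        for question in ask_llm(chunk):
            pairs.append({"question": question, "context": chunk})
    return pairs
```

The chunk size of 100 words is an arbitrary choice here; in practice it should match the granularity of the contexts your retriever returns.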

Testing Set Preparation

Sample and Refine: Create a smaller test set by sampling real user queries or questions that may come up in practice. This testing set helps ensure that your model performs well on unseen data.

Include Paraphrased Variations: Add slight paraphrases of the test questions to help the model handle different phrasings of similar queries.

For this example, we’ll use the “PubMedQA” dataset from Hugging Face, which contains unique publication IDs (pubid), questions, and contexts. Here’s a sample code snippet for loading and structuring this dataset:

from datasets import load_dataset, Dataset
import pandas as pd

med_data = load_dataset("qiaojin/PubMedQA", "pqa_artificial", split="train")

# Drop unused columns, then flatten each record's list of contexts into
# one (question, context) row per context
med_data = med_data.remove_columns(['long_answer', 'final_decision'])
df = pd.DataFrame(med_data)
df['contexts'] = df['context'].apply(lambda x: x['contexts'])
expanded_df = df.explode('contexts')
expanded_df.reset_index(drop=True, inplace=True)
splitted_dataset = Dataset.from_pandas(expanded_df[['question', 'contexts']])

Step 2: Constructing the Evaluation Dataset

To assess the model’s performance during fine-tuning, we prepare an evaluation dataset. This dataset is derived from the training set but serves as a realistic representation of how well the model might perform in a live setting.

Generating Evaluation Data

From the PubMedQA dataset, select a sample of contexts, then use the LLM to generate realistic questions based on this context. For example, given a context on immune cell response in breast cancer, the LLM might generate questions like “How does immune cell profile affect breast cancer treatment outcomes?”

Each row of your evaluation dataset will thus include several context-question pairs that the model can use to assess its retrieval accuracy.

from openai import OpenAI

client = OpenAI(api_key="")

prompt = """Your task is to mine questions from the given context.
{context} {example_question}"""

questions = []
for row in eval_med_data_seed:
    context = "\n\n".join(row["context"]["contexts"])
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt.format(
                context=context, example_question=row["question"])},
        ],
    )
    questions.append(completion.choices[0].message.content.split("|"))

Step 3: Setting Up the Information Retrieval Evaluator

To gauge model accuracy in the retrieval phase, use an Information Retrieval Evaluator. The evaluator retrieves and ranks contexts based on similarity scores and assesses them using metrics like Recall@k, Precision@k, Mean Reciprocal Rank (MRR), and Accuracy@k.

Define Corpus and Queries: Organize the corpus (context information) and queries (questions from your evaluation set) into dictionaries.

Set Relevance: Establish relevance by linking each query ID with a set of relevant context IDs, which represents the contexts that ideally should be retrieved.

Evaluate: The evaluator calculates metrics by comparing retrieved contexts against relevant ones. Recall@k is a critical metric here, as it indicates how well the retriever pulls relevant contexts from the database.
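The shape of those inputs, and the Recall@k metric the evaluator reports, can be illustrated in a few lines (toy data, not the PubMedQA corpus):

```python
# Corpus and queries are dicts mapping IDs to text; relevance maps each
# query ID to the set of context IDs that should ideally be retrieved
eval_corpus = {"c1": "Immune cell profiles in breast cancer.",
               "c2": "Solar panel efficiency trends."}
eval_queries = {"q1": "How do immune cells affect breast cancer outcomes?"}
eval_relevant_docs = {"q1": {"c1"}}

def recall_at_k(retrieved_ids, relevant_ids, k):
    # Fraction of the relevant contexts that appear in the top-k results
    return len(set(retrieved_ids[:k]) & relevant_ids) / len(relevant_ids)

# If the retriever ranks c1 first, Recall@1 for q1 is perfect
print(recall_at_k(["c1", "c2"], eval_relevant_docs["q1"], k=1))
```

The evaluator computes this (plus Precision@k, MRR, and Accuracy@k) across every query, which is why it is a good proxy for production retrieval quality.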

from sentence_transformers.evaluation import InformationRetrievalEvaluator

ir_evaluator = InformationRetrievalEvaluator(
    queries=eval_queries,
    corpus=eval_corpus,
    relevant_docs=eval_relevant_docs,
    name="med-eval-test",
)

Step 4: Training the Model

Now we’re ready to train our customized embedding model. Using the sentence-transformers library, we’ll configure the training parameters and utilize the MultipleNegativesRankingLoss function to optimize similarity scores between queries and positive contexts.

Training Configuration

Set the following training configurations:

Training Epochs: Number of training cycles.

Batch Size: Number of samples per training batch.

Evaluation Steps: Frequency of evaluation checkpoints.

Save Steps and Limits: Frequency and total limit for saving the model.

from sentence_transformers import (SentenceTransformer, SentenceTransformerTrainer,
                                   SentenceTransformerTrainingArguments, losses)

model = SentenceTransformer("stsb-distilbert-base")
train_loss = losses.MultipleNegativesRankingLoss(model=model)

# Illustrative values; tune for your dataset and hardware
args = SentenceTransformerTrainingArguments(
    output_dir="output",
    num_train_epochs=1,
    per_device_train_batch_size=32,
    eval_steps=100,
    save_steps=100,
    save_total_limit=2,
)

# Assumes splitted_dataset was split, e.g. via splitted_dataset.train_test_split()
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=splitted_dataset["train"],
    eval_dataset=splitted_dataset["test"],
    loss=train_loss,
    evaluator=ir_evaluator,
)

trainer.train()

Results and Improvements

After training, the fine-tuned model should display significant improvements, particularly in context recall. In testing, fine-tuning showed an increase in:

Recall@1: 78.8%

Recall@3: 137.9%

Recall@5: 116.4%

Recall@10: 95.1%

Such improvements mean that the retriever can pull more relevant contexts, leading to a substantial boost in RAG accuracy overall.

Final Notes: Monitoring and Retraining

Once deployed, monitor the model for data drift and periodically retrain as new data is added to the knowledge base. Regularly assessing context recall ensures that your embedding model continues to retrieve the most relevant information, maintaining RAG’s accuracy and reliability in real-world applications. By following these steps, you can achieve high RAG accuracy, making your model robust and production-ready.

FAQs

What is RAG in machine learning?

RAG, or retrieval-augmented generation, is a method that retrieves specific information to answer queries, improving the accuracy of LLM outputs.

Why does RAG fail in production?

RAG often struggles in production because the retrieval step may miss critical context, resulting in poor generation accuracy.

How can embedding models improve RAG performance?

Fine-tuning embedding models to a specific dataset enhances retrieval accuracy, improving the relevance of retrieved contexts.

What dataset structure is ideal for training embedding models?

A dataset with varied queries and relevant contexts that resemble real queries enhances model performance.

How frequently should embedding models be retrained?

Embedding models should be retrained as new data becomes available or when significant accuracy dips are observed.



Source link

Apple Admits to Security Vulnerability That Leaves Crypto Users Exposed—Here’s What You Should Do – Decrypt




Apple confirmed Monday its devices were left vulnerable to an exploit that allowed for remote malicious code execution through web-based JavaScript, opening up an attack vector that could have parted unsuspecting victims from their crypto.

According to a recent Apple security disclosure, users must use the latest versions of its JavaScriptCore and WebKit software to patch the vulnerability. 

The bug, discovered by researchers at Google’s threat analysis group, allows for “processing maliciously crafted web content,” which could lead to a “cross-site scripting attack.”

More alarmingly, Apple also admitted it “is aware of a report that this issue may have been actively exploited on Intel-based Mac systems.”

Apple also issued a similar security disclosure for iPhone and iPad users. Here, it says, processing maliciously crafted web content via the JavaScriptCore vulnerability “may lead to arbitrary code execution.”

In other words, Apple became aware of a security flaw that could let hackers take control of a user’s iPhone or iPad if they visit a harmful website. An update should solve the issue, Apple said.

Jeremiah O’Connor, CTO and co-founder of crypto cybersecurity firm Trugard, told Decrypt that “attackers could access sensitive data like private keys or passwords” stored in their browser, enabling crypto theft if the user’s device remained unpatched.

Revelations of the vulnerability began circulating within the crypto community on social media on Wednesday, with former Binance CEO Changpeng Zhao raising the alarm in a tweet advising users of MacBooks with Intel CPUs to update as soon as possible.

The development follows March reports that security researchers had discovered a vulnerability in Apple’s previous-generation chips—its M1, M2, and M3 series—that could let hackers steal cryptographic keys.

The exploit, which isn’t new, leverages “prefetching,” a process used by Apple’s own M-series chips to speed up interactions with the company’s devices. Prefetching can be exploited to store sensitive data in the processor’s cache and then access it to reconstruct a cryptographic key that is supposed to be inaccessible.

Unfortunately, Ars Technica reports that this is a significant issue for Apple users, since a chip-level vulnerability cannot be solved through a software update.

Potential workarounds can alleviate the problem, but they trade performance for security.

Edited by Stacy Elliott and Sebastian Sinclair




Source link
