
Microsoft’s Vision for Copilot: A New Era of AI Companionship



Have you ever wondered how and when Microsoft Copilot will become a true assistant and provide you with a new kind of support? With the latest updates to Microsoft Copilot, this vision is getting closer to reality. Announced on October 1, 2024, the refreshed Copilot vision aims to revolutionize our interaction with technology by focusing on how it feels to users, rather than just the technical details.

Contents:

- Microsoft Copilot: Your AI Companion
- New Upcoming Features to Copilot
- Copilot Vision and Think Deeper
- Regional Availability
- New Enhancements in Azure OpenAI Services
- GPT-4o-Realtime-Preview with Audio and Speech Capabilities
- Performance That Speaks
- Applications of GPT-4o-Realtime-Preview
- What’s Next with GPT-4o-Realtime API for Audio?
- A Commitment to Responsible AI

Microsoft Copilot: Your AI Companion

Microsoft’s Copilot is designed to be a calm, helpful, and supportive presence in your life. It goes beyond merely solving problems: it is there to support, teach, and help you. Copilot will eventually adapt to your preferences and needs, providing support and helping you navigate life’s complexities. And no, this is not sci-fi AI in the making, but simply the next step on the road to making Copilot more and more useful to us humans. One of the keys to these new features is multimodality, which is also becoming available via Azure OpenAI Services.

In the future, Copilot will be our UI to AI. As voice and natural language UIs become common, we will have less need to build complex UIs to enable interactions with backend and other systems. Instead of using a traditional UI, we will just talk or type to Copilot, and we will get the results. Perhaps we need to get some data analyzed? Instead of building a Power BI report, in the future we will ask Copilot to do that. Does that sound too far in the future? Did you notice that Excel got Python support? You can use Copilot in Excel today to analyze your data, and it generates and runs Python code that is connected to the data. Why would we not be able to do that in BizChat (in the near future, I hope)? Talking to AI might also sound a bit futuristic, but with the latest upcoming features to Copilot, it will be here soon – not in Europe, but in a few other regions first. And it won’t be just text to speech, but a Copilot voice that can mimic and understand feelings in speech.

Why is analyzing data a great example of this? We have various needs, and some of them are ad hoc despite being somewhat complex. We may not need the results as a report; instead, we need to know or see what the data is all about. And often the data lives in backend systems, which brings me to connecting Copilot to systems beyond Microsoft 365. We can already start piloting extensions and plugins that extend Copilot’s capabilities. Instead of doing a full analysis, we might just want to know the total of sales for the current day or week – information that can be fetched from the backend, something we could simply ask our Copilot. What’s already in the works is how we can perform actions in external systems. Instead of opening a web page or app and logging into a system, we would do all this via our digital assistant. This is why this area is extremely interesting and important to keep in mind.
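To make the idea concrete, here is a minimal sketch of the kind of ad hoc backend query such an extension might run when you ask Copilot for a day's sales total. The data, function name, and row shape are hypothetical, purely for illustration of the pattern:

```python
from datetime import date

# Hypothetical order rows standing in for a backend sales system
orders = [
    {"day": date(2024, 10, 1), "amount": 1200.0},
    {"day": date(2024, 10, 1), "amount": 450.0},
    {"day": date(2024, 10, 2), "amount": 800.0},
]

def total_sales(rows, day):
    """Answer the ad hoc question: what is the total of sales for this day?"""
    return sum(r["amount"] for r in rows if r["day"] == day)

print(total_sales(orders, date(2024, 10, 1)))  # 1650.0
```

A plugin would fetch the rows from the real backend instead of a list, but the point stands: a one-line question replaces a hand-built report.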

This won’t happen tomorrow, but it is coming sooner than we think. We can already extend Copilot and build plugins and custom Copilot agents in various ways – such as Copilot Studio, Power Automate, and pro-code with Teams Toolkit and Teams AI Studio. I would recommend starting to experiment with these as soon as possible to make your organization future-proof.

My thoughts and visions align with Microsoft’s Copilot vision, so it is easy to be very excited about the opportunities and possibilities ahead of us on this journey. I recently took part in a great meeting with fellow MVPs from The Digital Neighborhood at our HQ in Amsterdam. Ideas and thoughts about the future were discussed from various perspectives, and it was one of my colleague MVPs who brought up the data analysis example, pointing out how code interpretation will be a real game-changer there. It is already here, at various levels of implementation. We have also seen how GPT-4o with voice works – if you haven’t seen those videos, do ask Copilot about them (or just search with Bing or Google). The future is interesting, for sure!

New Upcoming Features to Copilot

The latest updates to Copilot will include several new and enhanced features:

Copilot Voice: This feature allows you to connect with your AI companion using voice commands (multimodality). With four voice options to choose from, it’s the most intuitive way to brainstorm, ask questions, or simply vent. Copilot doesn’t have feelings, so it is a perfect companion for venting things out – a safe place to do that. Don’t confuse Copilot’s capability to mimic feelings in its voice with actual feelings and emotions. Copilot is a tool and an algorithm at its core, not an AGI (Artificial General Intelligence).

Copilot Daily: Start your morning with a summary of news and weather, all read in your favorite Copilot Voice. This feature helps you manage the daily barrage of information with ease. It is quite cool to see this happening, as it has featured in so many sci-fi movies and visions of the future.

Copilot in Microsoft Edge: Copilot is now integrated into the Microsoft Edge browser, quickly helping to answer questions, summarize page content, translate text, or rewrite sentences. The cool part? Multimodality – Copilot will also understand images on web pages.

Copilot Labs: This platform allows users to test experimental features like Copilot Vision and Think Deeper, providing feedback to shape future updates.

Copilot Vision and Think Deeper

Copilot Vision: This innovative feature enables Copilot to see what you see and interact with web pages in real time, offering suggestions and answering questions without disrupting your workflow.

For Microsoft, safety and security are top priorities. In Microsoft’s own words:

Copilot Vision sessions are entirely opt-in and ephemeral. None of the content Copilot Vision engages with is stored or used for training — the moment you end your session, data is permanently discarded.

The experience won’t work on all websites because we’ve taken important steps to put boundaries on the types of websites Copilot Vision can engage. We’re starting with a limited list of popular websites to help ensure it’s a safe experience for everyone.

Copilot Vision won’t work on paywalled and sensitive content for this preview. We’ve created it with both users’ and creators’ interests top of mind.

There is no specific processing of the content of a website you are browsing, nor any AI training. Copilot Vision simply reads and interprets the images and text it sees on the page for the first time along with you.

Before we launch broadly, we’ll continue to take feedback on all the above from early users in Copilot Labs, refine our safety measures and keep privacy and responsibility at the center of everything we do. Let us know what you think!

Think Deeper: Designed to reason through complex questions, this feature provides detailed, step-by-step answers to challenging queries, helping you make informed decisions. This is an early Copilot skill that is still under development, so Microsoft placed it in the experimental Copilot Labs to test it and gather feedback.

Regional Availability

As exciting as these features are, it’s important to note their regional rollout plans.

Copilot Voice is initially available in English in Australia, Canada, New Zealand, the United Kingdom, and the United States. Expansion to more regions and languages will follow soon.

Copilot Daily is rolling out first in the United States and the United Kingdom, with additional countries to be added shortly.

Copilot Vision will be accessible through Copilot Labs to a limited number of Copilot Pro subscribers in the United States.

Think Deeper starts its rollout this week to a limited number of Copilot Pro users in Australia, Canada, New Zealand, the United Kingdom, and the United States.

Unfortunately, for those of us in Europe, we will need to wait a bit longer for these exciting new features. Microsoft is working diligently to ensure that personalization in Copilot adheres to the Microsoft Privacy Statement, and options for offering personalization to users in the European Economic Area and the United Kingdom are still being finalized.

Read more about these updates and Microsoft’s Copilot vision from their blog post.

New Enhancements in Azure OpenAI Services

As Copilot uses Azure OpenAI Service (AOAI) in the background (users don’t see this; they just use Copilot), the advancements in AOAI are what make it possible to bring these features to Copilot. Microsoft just announced several updates to the Azure OpenAI Service. Below, read about the latest advancements and the potential opportunities.

GPT-4o-Realtime-Preview with Audio and Speech Capabilities

The introduction of GPT-4o-Realtime-Preview marks a significant milestone: it brings advanced voice capabilities to the Microsoft Azure OpenAI Service, expanding GPT-4o’s multimodal offerings. The integration of language generation with voice interaction allows developers to craft more natural and conversational AI experiences. From creating virtual assistants to powering real-time customer support, the possibilities are vast and promising. The above-mentioned Copilot Voice is a good example of how this capability can be utilized.

The GPT-4o-Realtime API supports audio input and output, enabling real-time, natural voice-based interactions. This multimodal capability empowers developers to build innovative voice applications with ease, providing faster and more engaging responses that minimize the robotic tone often associated with AI-generated speech. Moreover, the API supports a wide range of languages, facilitating natural, multilingual conversations for global-facing applications.
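As a rough sketch of what this looks like in practice, here is the kind of JSON event a client might send over the realtime websocket to configure a voice session. The event and field names follow the public preview as I understand it and may change, so treat them as assumptions rather than a spec:

```python
import json

def build_session_update(voice: str = "alloy") -> str:
    """Build a session.update event asking for text and audio responses."""
    event = {
        "type": "session.update",
        "session": {
            # Multimodal: the model replies with both text and synthesized speech
            "modalities": ["text", "audio"],
            "voice": voice,
            "input_audio_format": "pcm16",
            "output_audio_format": "pcm16",
        },
    }
    return json.dumps(event)

print(build_session_update())
```

Once the session is configured, the client streams microphone audio up and receives audio deltas back over the same connection – no separate STT or TTS hop in between.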

This also means it won’t be necessary to use the Azure Speech to Text (STT) and Text to Speech (TTS) services to create a voice interface for your AI. Adding voice will be much easier now – but that doesn’t mean we no longer need the STT and TTS services. With the Speech services we can use custom voices, photorealistic avatars, and a lot more. But for Copilot and AI apps, having these capabilities built into GPT-4o will be a big advantage in both speed and ease of development. We won’t notice the “AI delay” we experience with the typical speech-to-text, then LLM, then text-to-speech roundtrip.

This will be available for standard and global standard deployments in East US 2 and Sweden Central for approved customers. Regional availability ensures that users across different geographical locations can access and benefit from the advanced capabilities of the GPT-4o-Realtime API for Audio.

Performance That Speaks

Early adopters of the GPT-4o-Realtime API for Audio have reported remarkable results, including significantly faster responses and more natural conversations. These improvements are particularly beneficial for applications such as voice-based chatbots, virtual assistants, and real-time translators, enhancing user engagement and satisfaction.

Applications of GPT-4o-Realtime-Preview

The versatility of GPT-4o-Realtime-Preview spans across various industries, transforming how businesses operate and how users interact with technology:

Customer Service: Voice-based chatbots and virtual assistants can handle customer inquiries more naturally and efficiently, reducing wait times and improving overall satisfaction.

Content Creation: Media producers can revolutionize their workflows by leveraging speech generation for use in video games, podcasts, and film studios.

Real-Time Translation: Industries such as healthcare and legal services can benefit from real-time audio translation, breaking down language barriers and fostering better communication in critical contexts.

A Commitment to Responsible AI

Azure remains steadfast in its commitment to responsible AI, with safety and privacy as default priorities. The Realtime API uses multiple layers of safety measures, including automated monitoring and human review, to prevent misuse. Additionally, the Realtime API has undergone rigorous evaluations guided by Microsoft’s Responsible AI commitments, ensuring a secure and responsible AI experience for users.

What’s Next with GPT-4o-Realtime API for Audio?

Microsoft will continue to innovate and expand the capabilities of the GPT-4o-Realtime API for Audio, and they are excited to see how we – partners, developers, and businesses – will leverage these new technologies to create voice-driven applications, preferably ones that push the boundaries of what’s possible. Starting today, you can explore these new capabilities in Azure OpenAI Studio, experiment with them in the Early Access Playground, or integrate the real-time API (in public preview) into your applications. Be sure to review the documentation for the latest updates, dive into the available use cases, and start building with the GPT-4o-Realtime API for Audio to bring your business to the next level of AI innovation.

Read more about these updates to the Azure OpenAI Service in Microsoft’s announcement posts.

Microsoft is committed to ensuring that AI enriches people’s lives and strengthens our bonds with others, while supporting our unique and complex humanity. Copilot is not just another tool; it’s a companion designed to be by your side, always supporting you in ways that matter most.

As we embark on this exciting journey, Microsoft remains dedicated to accountability, respect, and compassion for users and society. This is a journey we promise to take together, and we couldn’t be more thrilled to start it with you.

Stay tuned for more updates and get ready to experience a new era of AI companionship with Copilot.

Published by Vesa Nopanen

Vesa “Vesku” Nopanen, Principal Consultant and Microsoft MVP (M365 and AI Platform) working on Future Work at Sulava.

I work, blog, and speak about Future Work: AI, Microsoft 365, Copilot, Microsoft Mesh, the Metaverse, and other services and platforms in the cloud, connecting the digital and physical worlds and people together.

I have about 30 years of experience in the IT business across multiple industries, domains, and roles.




Ask Questions to Google Lens: New Feature Explained – Metaverseplanet.net



Google has introduced a new feature to its Lens application, allowing users to ask instant questions while recording videos. Expanding the capabilities of its visual search app, Google now enables both Android and iOS users to ask real-time questions about objects around them while using the video recording function in Lens. This new feature helps users quickly learn more about interesting things they encounter in everyday life. Lou Wang, Director of Product Management for Lens, stated that this feature is powered by Google’s Gemini model, part of the company’s artificial intelligence family.

Users will be able to instantly access the information they are curious about

As an example, Wang explains that if a user is curious about a fish, Lens can explain why the fish swims in circles and provide relevant sources.

To access the new video analysis feature, users must sign up for Google’s Search Labs program. Recording video in the Google app is as simple as pressing and holding the smartphone’s shutter button. Users will then be able to ask questions while recording, and Google’s AI Overview feature will summarize the information in response.

Additionally, Google Lens will enable users to search by both image and text simultaneously. For instance, when Lens identifies a product, it can provide details such as price, brand, reviews, and stock availability. Although this feature is currently limited to certain countries and shopping categories, it is anticipated that it will expand over time.





YouTuber MrBeast Gets Inspired from ‘Ready Player One’, To Release Videos Post Death



The internet’s biggest content creator, MrBeast (real name Jimmy Donaldson), has revealed his plans to dominate YouTube even after his death. In a move that seems straight out of Steven Spielberg’s metaverse movie “Ready Player One,” where virtual reality creator James Halliday leaves behind a digital legacy, the YouTube sensation has laid out his own plans for a digital afterlife.

In a recent podcast with KSI and Logan Paul, MrBeast talked about YouTuber Nikocado Avocado, who pranked the whole internet by sharing prerecorded videos for over two years and then suddenly surprising everyone with his transformation.

After news broke of MrBeast’s plan to release videos after his death, many social media users were quick to point out the eerie similarity between his plans and the main plot of the 2018 sci-fi movie ‘Ready Player One’, in which a popular game creator announces a major in-game treasure hunt after his death, in a series of videos.

MrBeast revealed in the podcast that he has some similar plans in place. He has already filmed 15 videos that will be posted after he passes away. He mentioned that certain people in his company know where to find these videos on his old computer.

He added that the videos would be uploaded only once a month. However, MrBeast also clarified that the videos are “pretty garbage.” He joked that in one video he was just sitting at a table opening an old fan email, and he didn’t even know what it was.

In a characteristically playful tone, MrBeast mentioned that the series’ first video will be titled “My Last Video” and that these videos include direct addresses to his audience, such as: “I’m probably in a coffin right now just chilling, don’t feel bad for me, I’m dead.”

The 26-year-old creator’s post-mortem content strategy bears an uncanny similarity to the “Ready Player One” concept of digital preservation. This strategy maintains the illusion of MrBeast’s active presence for his audience, much like a modern-day Halliday.

MrBeast’s channel began in 2012 and has grown to 318 million subscribers, making him the biggest YouTuber in history. However, fame also brings controversy: he has recently become embroiled in a swirling controversy and a class action lawsuit.

In September 2024, five individuals filed a lawsuit against MrBeast alleging contestant mistreatment, including claims of unpaid wages and poor working conditions. These controversies relate to his upcoming Amazon Prime game show, “Beast Games,” which has a massive budget of $100 million.

Also Read: India Plans Massive Animation and Metaverse Push for Movies




HBO Claims to Have Found Bitcoin’s Creator in New Documentary – Metaverseplanet.net



HBO has announced that it will release a new documentary in one week, claiming to have uncovered the identity of Satoshi Nakamoto, the enigmatic creator of Bitcoin!

HBO, widely known for its association with Game of Thrones, has made headlines with the announcement of a documentary set to air in one week.


The documentary, whose trailer has already been released, features excerpts from interviews with early Bitcoin users, including Blockstream founder Adam Back and JAN3 CEO Samson Mow.

According to Politico, the documentary claims to have solved the internet’s biggest mystery by uncovering the true identity of Satoshi Nakamoto, the elusive creator of Bitcoin.

In fact, if the findings are confirmed, it is claimed that they could send shockwaves through global financial markets and even impact the U.S. presidential election.





Metaverse Use Cases and Benefits 2024 | Vegavid



The Metaverse is an exciting new virtual world concept that has captured the imagination worldwide. Though still in the early development stages, it aims to enable fully immersive shared online experiences through innovations like virtual and augmented reality. While much discussion focuses on long-term possibilities, useful Metaverse applications are already emerging across different fields.

In this article, we will explore 10 potential real-world use cases for the Metaverse and how organizations are applying these innovative technologies today. By highlighting practical examples currently in pilot or scaling phases, we aim to demonstrate this new digital frontier is not just a futuristic concept – valuable applications exist now with more on the horizon. Let’s begin our exploration of the Metaverse’s diverse potential!

Online Meetings & Conferencing

With remote work here to stay, companies are testing virtual meeting rooms in the Metaverse. Spatial allows colleagues to chat face-to-face via virtual avatars, while Anthropic enables coworkers to collaboratively view and edit 3D models. These tools promote human connection missing on 2D video calls.

Virtual & Hybrid Events

Large events are moving online using Metaverse platforms. In 2020, music star Travis Scott held a popular concert in Fortnite watched by over 12 million players. Similarly, the social VR app VRChat has hosted conferences, parties, and networking events inside virtual worlds. This blends physical and digital interactions.

Online Education

Educational applications of the Metaverse focus on interactive learning and skill development. For instance, the VR startup Foundry10 helps train industrial workers via virtual classrooms recreating real work environments. Meanwhile, Anthropic partners with universities to deliver online artificial intelligence courses inside 3D worlds.

Real Estate & Architecture

Property developers are exploring “Meta-estates” by designing and showcasing virtual buildings, neighborhoods, and smart cities. Platforms like Zygna allow viewing home listings via VR headsets, while Spatial’s virtual world incorporates residential and commercial real estate. This broadens marketing while prototyping new urban designs.

Retail & E-Commerce

Fashion brands like Nike and Louis Vuitton debuted virtual stores in gaming worlds and VR platforms. Decentraland created a digital shopping district for crypto clothing and art marketplace RTFKT. Customers can try on and purchase virtual apparel and accessories, potentially driving more online sales.

Gaming & Entertainment

As the most widely used Metaverse experiences to date, gaming platforms like Roblox and Fortnite are effectively social virtual worlds where creativity and entertainment blend. Users craft immersive games, worlds, concerts, and communities through these digital sandboxes, reaching millions of players globally.

Travel & Tourism

The travel industry is tapping Metaverse’s potential by rebuilding famous landmarks and natural wonders. Within VR experiences, users can freely explore virtual recreations of places like the Great Wall of China or the Great Barrier Reef without constraints of physical distance. This sparks wanderlust and planning of future trips.

Healthcare & Wellness

The Metaverse aims to enhance healthcare through telemedicine, employee wellness programs, and personalized medical simulations. For example, Anthropic develops AI coaching platforms for remote doctor consultations and therapy sessions inside virtual clinics. The Metaverse also facilitates health education and the training of medical professionals through virtual reality.

Sports & Fitness

Sports leagues and gyms are trialing immersive fitness activities within Metaverse platforms. For instance, the VR fitness app Supernatural offers boxing, dance, and meditation classes led by instructors appearing as virtual avatars. Its goal is to motivate exercise through social interaction and gamified challenges anywhere users have VR headsets.

Work & Productivity

Forward-thinking companies have pioneered the adoption of virtual and augmented reality within core operations. For example, industrial giants like BMW, Dassault, and Volkswagen use virtual design spaces to accelerate product development cycles. Construction firms, too, deploy mixed reality to remotely manage collaborative building projects worldwide. As the Metaverse evolves, more organizations may integrate these tools into their workflows to streamline processes.

As illustrated above, Metaverse technologies are being applied today across varied fields, from social networking to e-commerce, healthcare, property, and beyond, aiming to make a positive impact. While challenges remain in areas like scalability, hardware costs, and data privacy, ongoing technical progress indicates the Metaverse’s full realization as a generative shared virtual economy may unfold sooner than many expect. Innovators are continually reimagining how advancing digital realities can empower areas of our lives.

The emerging Metaverse concept represents a paradigm shift in how we interact and transact in virtual spaces. This fully immersive next generation of the internet aims to push boundaries of productivity, creativity, and connectivity using tools like virtual and augmented reality. While still maturing, the Metaverse is sparking excitement over opportunities for industries and enterprises. In this article, we explore the top potential benefits businesses may gain by embracing this innovative technological frontier.

Online Meetings & Conferencing

Business operations today heavily rely on remote collaboration via video calls. However, the Metaverse opens new possibilities for virtual face-to-face communication through avatar representation in shared digital workspaces. Platforms like Anthropic enable synchronous co-editing of 3D designs alongside live audio conferencing, promoting higher engagement absent in typical 2D virtual meetings. This blended virtual-physical interaction model can boost productivity for globally distributed teams.

Virtual & Hybrid Events

Corporate conferences, trade shows and product launches transitioning online faced challenges in reaching intended scale and immersion through conventional web platforms. However, virtual venues within the Metaverse like Decentraland provide interactive shared spaces perfect for organizing everything from presentations to networking sessions to virtual vendor booths without travel constraints. Furthermore, these hybrid virtual-physical events could combine in-person activities with extended online participation through Metaverse access, significantly broadening potential audiences.

Training & Onboarding

Induction of new hires and continuous skills training remain pivotal yet face resource constraints. Leveraging the Metaverse promises to transform this process. Tools like VR let employers remotely impart practical job skills via interactive virtual simulations that effectively mimic real conditions. For example, construction firms could host recruits inside VR versions of active development sites to preview health & safety protocols before commencing physical work. Similarly, healthcare organizations may teach delicate procedures through interactive virtual patients. This approach could streamline both initial and ongoing training workflows.

Remote Assistance

Augmented reality promises to revolutionize helping remote workers troubleshoot issues by overlaying visual data directly onto real environments. Pilot programs see technicians utilizing AR smartglasses to virtually guide on-site staff through repair processes from afar by viewing live video feeds and drawings directly over real-world components. This could minimize downtime from machinery failures while reducing service costs and travel requirements compared to dispatching experts on-site. As technologies mature, the Metaverse may enable infinitely scalable remote collaborative assistance globally.

Digital Marketing & Advertising

Digital marketing or creative marketing within virtual worlds allows unique brand engagement far beyond conventional banners or videos. Major companies are already securing virtual real estate within Metaverse platforms to construct immersive 3D stores and host experiential promotional events. For instance, clothing brands offer custom virtual apparel try-ons that convert online window shoppers. Additionally, product placements within hugely popular virtual worlds offer unprecedented viral exposure potential for brands. The multimedia advertising formats could redefine how companies create digital customer relationships.

Remote Work Collaboration

While remote work boomed amid pandemic lockdowns, maintaining authentic team culture proved challenging through typical video calls alone. The Metaverse provides fully immersive digital HQs with interactive virtual meeting spaces where colleagues can brainstorm together live as avatar representations. Features like integrated whiteboards, 3D modeling, and spatial audio help simulate true office dynamics remotely. This may improve collaboration quality and job satisfaction for distributed hybrid work teams long-term.

As the Metaverse evolves from basic pilots today, hybrid virtual-augmented technologies will integrate ever deeper into our daily business operations and open new creative processes. Done responsibly with user wellness prioritized, advances could streamline workflows, spark innovation, and uniquely empower global connectivity for enterprises. Looking ahead, realizing its full societal impacts will require focused cooperation between innovators and policymakers to ensure humanity’s best interests guide this profound technological shift.

Conclusion

While still nascent, the Metaverse represents a paradigm shift with enormous potential to positively transform core business processes from remote work to marketing to skills development if shaped sustainably. Early applications demonstrate its capacity to drive high productivity and engagement through vivid shared virtual experiences beyond the restrictions of physical distance alone. By creatively piloting innovative use cases now, organizations stand positioned to aptly benefit as this vibrant frontier further matures. With care and reason applied, the Metaverse age could inspire unprecedented progress.

The Metaverse concept demonstrates immense potential if responsibly developed and applied. As a persistent, interconnected digital space, it could revolutionize how we work, learn, entertain, and connect in the future. Early pilots highlight its ability to positively transform industries and daily experiences through innovations like virtual collaboration tools, immersive online events, and interactive virtual simulations. Going forward, balancing human needs with responsible development will be key to ensuring the Metaverse uplifts humanity. There are great possibilities ahead as technical boundaries push creative imaginations further.




Streamlined Image Sharing: Gemini’s New Android Feature – Metaverseplanet.net



Gemini introduces an exciting new feature for Android users, allowing them to share images directly from other apps. With the latest update, users can now easily add visual content alongside text commands. Version 1.0.668480831 of Gemini enables users to import images from Google Photos or other Android apps straight into the Gemini app.

By selecting the Gemini icon from Android’s share menu, users can quickly and conveniently share their pictures. However, this functionality is currently limited to visuals, as links and text cannot be transferred to Gemini.

One-touch picture sharing

Streamlined Image Sharing: Gemini's New Android Feature

Previously, adding an image to Gemini required opening the app, selecting the image from the gallery, and then adding the text prompt. This process could be cumbersome, especially if the image was stored in cloud services or other apps. Now, users can streamline the process by simply tapping the Gemini icon.

There is one limitation, however: the feature does not work within the overlay interface, so users must exit their current application first. While this restriction may inconvenience some users, the update nonetheless significantly improves the efficiency of image sharing in Gemini.





Transforming Gaming: The Impact of Web3 Technology – Metaverseplanet.net



While the traditional gaming industry has continuously evolved over the years, the rise of Web3 technology has ushered in an entirely new era of gaming. Web3, a collection of decentralized, blockchain-based technologies, is leading to revolutionary changes in the gaming industry. In this article, we will explore how Web3 is transforming the gaming world and its potential for the future.

Why is Web3 important for the gaming industry?

Transforming Gaming: The Impact of Web3 Technology

Web3 is crucial for the gaming industry because it enables true ownership and trading of in-game assets through decentralized, blockchain-based technologies. This allows players to convert in-game assets into real money and trade them on decentralized markets, making in-game economies more dynamic and vibrant. Additionally, Web3’s integration of NFTs (Non-Fungible Tokens) facilitates the creation of tokens that represent unique digital ownership of in-game assets. This enables players to collect and trade rare or unique items, adding more value and appeal to the gaming experience.

What innovations does Web3 bring to the gaming industry?

Web3 is fundamentally transforming the gaming industry with innovations such as decentralized economies, the integration of NFTs, and increased player participation in game development.

Decentralized Game Economies: Web3 allows players to own and trade in-game assets in a decentralized way, providing true ownership and value. Players can convert their in-game assets into real-world money and trade them outside the game environment.

Integration of NFTs (Non-Fungible Tokens): NFTs represent unique digital ownership of in-game assets. Web3 enables the tokenization of these assets as NFTs, allowing players to buy, sell, and trade rare or unique items.

Interoperable Game Items: Web3 allows in-game assets to be freely traded on decentralized marketplaces. Players can trade assets between different games and use the items they’ve earned across multiple gaming platforms.

Increased Player Engagement and Influence: Web3 fosters greater player involvement in the game development process, giving players more influence over the future of games. Decentralized gaming communities and governance models empower players to contribute directly to the development of gaming ecosystems.
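The decentralized ownership-and-trading model the innovations above describe can be illustrated with a minimal in-memory sketch. This is not an actual blockchain implementation — the `AssetLedger` class and the asset names are hypothetical — but it shows the core invariant an NFT-style registry enforces: each unique asset has exactly one owner, and only that owner can trade it away.

```python
from dataclasses import dataclass, field

@dataclass
class AssetLedger:
    """Toy in-memory stand-in for an NFT-style ownership registry.

    A real Web3 game would record this state on-chain (e.g. as
    ERC-721 tokens); here a dict maps each unique asset ID to its
    current owner. Purely illustrative.
    """
    owners: dict = field(default_factory=dict)

    def mint(self, asset_id: str, owner: str) -> None:
        # Each asset ID is unique, mirroring non-fungible tokens.
        if asset_id in self.owners:
            raise ValueError(f"{asset_id} already minted")
        self.owners[asset_id] = owner

    def transfer(self, asset_id: str, seller: str, buyer: str) -> None:
        # A trade succeeds only if the seller actually owns the asset —
        # the rule a blockchain enforces without any central server.
        if self.owners.get(asset_id) != seller:
            raise PermissionError("seller does not own this asset")
        self.owners[asset_id] = buyer

ledger = AssetLedger()
ledger.mint("rare-skin-001", "alice")
ledger.transfer("rare-skin-001", "alice", "bob")
print(ledger.owners["rare-skin-001"])  # prints "bob"
```

Because the ledger, not the game server, is the source of truth, the same record could in principle be read by several different games — which is what makes cross-game trading of items possible.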

What is Web3’s potential for the future?

Real-Time Economies and Gameplay: Web3 enables players to buy and sell their in-game assets in real-time, making gaming economies more dynamic and engaging.

Increased Freedom and Flexibility in Games: Web3 offers both developers and players greater flexibility, allowing for the creation of new game development models and innovative gaming experiences.

Integration of Real and Virtual Worlds: Web3 blurs the boundaries between the real world and virtual worlds, enriching the gaming experience. For instance, real-world events can be integrated into games, or in-game assets could be linked to real-world counterparts.

Web3 is emerging as a powerful technology with the potential to transform the gaming industry. Innovations such as decentralized economies, the integration of NFTs, interoperable game items, and increased player participation are driving significant changes in how games are developed and played. The full potential of Web3 in the gaming world remains to be seen, but it is expected to grow and evolve further in the coming years.





Web3 Games Accelerate with Lamborghini’s Fast ForWorld – Metaverseplanet.net



Luxury automaker Lamborghini has teamed up with Web3 gaming giant Animoca Brands to create a new blockchain platform called “Fast ForWorld.” This platform brings Lamborghini’s iconic vehicles into the digital realm, offering players a unique experience. Available within the Motorverse ecosystem, these digital Lamborghinis are redefining the boundaries of both the automotive and gaming industries.

The automotive industry continues to explore the opportunities provided by blockchain technology. Lamborghini’s Web3 initiative is seen as the latest example of luxury automakers adapting to digital transformation. The “Fast ForWorld” platform gives Lamborghini enthusiasts the chance to experience their favorite vehicles in a virtual world, while also showcasing the potential of Web3 technology in the automotive sector.

Lamborghini and Animoca Brands Announce Web3 Partnership

Web3 Games Accelerate with Lamborghini's Fast ForWorld

Lamborghini has announced a partnership with Web3 gaming company Animoca Brands, bringing the luxury car brand’s first interoperable blockchain-based digital vehicles to the gaming world. This new platform, called Fast ForWorld, allows Lamborghini vehicles to be used in blockchain games.

Motorverse, a subsidiary of Animoca Brands, serves as the infrastructure for the Fast ForWorld platform. Through this ecosystem, players can buy, sell, and use Lamborghini vehicles in various games. Popular titles like Torque Drift 2, REVV Racing, and Motorverse Hub are among the platforms where these digital Lamborghinis can be utilized.

The first version of Fast ForWorld is set to launch on November 7. The platform will provide users with a 3D wallet, enabling them to securely store their digital assets and other items. Additionally, players will be able to interact with their digital Lamborghinis through this wallet and use them in games.

This is not Lamborghini’s first venture into the Web3 space. The brand previously partnered with Animoca Brands on August 8 to explore new Web3-based brand engagement initiatives and offer unique digital experiences to Lamborghini fans.

Yat Siu, co-founder and chairman of Animoca Brands, highlighted the significance of this partnership, stating that Lamborghini was one of the first brands to adopt interoperability standards with Motorverse. The collaboration aims to create a truly interconnected Web3 gaming experience for motorsport fans and gamers.

Thanks to the unique features of blockchain technology, the Fast ForWorld platform allows players to buy, sell, and use digital Lamborghini vehicles across different games. This creates a new digital economy model, showcasing how the cryptocurrency economy can intersect with the automotive and gaming industries.

Lamborghini’s initiative exemplifies how luxury brands can embrace Web3 technologies. Beyond just providing a gaming experience, this platform leverages the transparency, security, and interoperability of blockchain technology to create an environment where digital assets hold real value.





Unveiling the Humanoid Robot Fourier GR-2: Key Details – Metaverseplanet.net



The humanoid robot Fourier GR-2 has been making headlines recently. Here are the key details about this much-discussed machine.

As exciting developments in robotics continue, China-based Fourier Intelligence has advanced its GR series of humanoid robots with the introduction of the GR-2 model. Building on last year’s GR-1, the GR-2 offers substantial improvements both physically and functionally. Standing 175 cm tall and weighing 63 kg, the GR-2 is designed to be at eye level with most adults. Unlike the earlier GR-1, which had a slim, skeletal appearance, the GR-2 features a more polished and stylish exterior, with its delicate internal components now protected by a durable plastic casing.

Humanoid robot Fourier GR-2 is on the agenda

One of the standout features of the GR-2 is its upgraded motors. The actuators, which produced 300 Nm of torque in the GR-1, have been enhanced to 380 Nm in the GR-2, significantly boosting the robot’s lifting capacity. Although Fourier has not fully disclosed the exact capacity, it is speculated that the GR-2 surpasses the GR-1 in this area, as the GR-1 could lift nearly its own weight. Additionally, the GR-2 can walk at a speed of 5 km/h.

One of the GR-2’s most notable innovations is its hands. With 12 degrees of freedom and the ability to sense tactile force, these hands allow the robot to instantly adjust its grip strength by “feeling” the shape and material of the objects it touches. While these electrically powered hands may be slower and less powerful than the hydraulic systems used in other humanoid robots, they have the potential to offer precise and safe handling, especially when paired with advanced artificial intelligence algorithms.
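The grip behavior described above — continuously adjusting force based on what the tactile sensors report — is commonly implemented as a feedback loop. The sketch below is a hypothetical proportional controller, not Fourier's actual control code; the function name, gain, and force limits are illustrative assumptions.

```python
def adjust_grip(target: float, sensed: float, gain: float = 0.5,
                max_force: float = 30.0) -> float:
    """One step of a proportional grip-force controller.

    target: desired contact force in newtons; sensed: force currently
    reported by the tactile sensors. Returns the next force setpoint,
    clamped so a fragile object is never crushed. Illustrative only —
    real hands run such loops at high rates per finger joint.
    """
    error = target - sensed
    next_force = sensed + gain * error
    return max(0.0, min(next_force, max_force))

# Converge toward a 10 N grip starting from no contact.
force = 0.0
for _ in range(20):
    force = adjust_grip(10.0, force)
print(round(force, 2))  # prints 10.0
```

The clamp is the safety-relevant part: even if a user commands an unreasonable force, the setpoint never exceeds `max_force`, which is how electrically driven hands can trade raw power for safe, precise handling.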

Currently, the GR-2’s hands can carry weights of up to 6 kg. As a result, the robot is reportedly not designed for industrial tasks like heavy lifting, but rather for providing in-home assistance to elderly or disabled individuals. In regions with aging populations or labor shortages, such robots are seen as a significant part of the solution.

The AI systems integrated into the GR-2 offer major advancements in learning and interaction. The robot can be controlled in various ways, including remote control (telepresence), VR commands, or direct manipulation of its limbs. Through a technique called “lead-through programming,” users can physically guide the robot to teach it new tasks.
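Lead-through programming of this kind is typically implemented as waypoint recording and playback: while the motors are compliant, the robot samples its joint positions as the user moves it, then replays them as a taught trajectory. The sketch below is a minimal illustrative version — the class name and joint values are hypothetical, not Fourier's API.

```python
class LeadThroughRecorder:
    """Record joint poses while a user physically guides the arm,
    then replay them as a taught trajectory. Illustrative only."""

    def __init__(self):
        self.waypoints = []

    def record(self, joint_angles: list) -> None:
        # On a real robot these samples come from joint encoders while
        # the motors are in a compliant, back-drivable mode.
        self.waypoints.append(list(joint_angles))

    def replay(self) -> list:
        # Playback would interpolate between waypoints and stream
        # position targets to the motor controllers; here we just
        # return the recorded poses in order.
        return [list(w) for w in self.waypoints]

rec = LeadThroughRecorder()
rec.record([0.0, 0.5, -0.3])   # user guides the arm to pose 1
rec.record([0.2, 0.7, -0.1])   # pose 2
print(len(rec.replay()))  # prints 2
```

The appeal of this approach is that teaching a new task requires no programming at all — the demonstration itself is the program.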

The GR-2 is compatible with widely used open-source robotics development software such as ROS, MuJoCo, and Nvidia’s Isaac Lab, making it an ideal platform for both commercial and academic research and development projects.

AI-powered humanoid robots are rapidly evolving and seem poised to play a larger role in our daily lives soon. Robots like the GR-2 hold great potential in domestic care and support. However, the biggest challenge remains getting robots to interact with the real world in a safe, effective, and practical manner. Humanoid robots are not yet ready for widespread daily use, but with the fast pace of AI advancements, progress in this area is expected to continue at a similar speed.

What are your thoughts on this? Share your opinion in the comments!





Harvard Students Unveil Credential Project with Meta Glasses – Metaverseplanet.net



Two Harvard students have developed a project that uses Meta’s Ray-Ban smart glasses to reveal people’s identities and personal details instantly. The demonstration highlights how facial recognition technology and public databases can be combined in potentially dangerous ways. Anh Phu Nguyen and Caine Ardayfio leverage the glasses’ Instagram live-streaming capability together with a system they call I-XRAY: the artificial intelligence identifies faces in the real-time feed and retrieves personal information such as names, addresses, and phone numbers, which is then sent to the user through a mobile app.

The security hazards of facial recognition technology

Harvard Students Unveil Credential Project with Meta Glasses

In Nguyen and Ardayfio’s demonstration, the students are shown identifying both classmates and strangers on public transportation using the smart glasses. While the accuracy of facial recognition technology is already well-established, the integration of this technology with a readily accessible device like the Meta glasses significantly amplifies the potential for misuse. The widespread availability of face search engines, such as PimEyes, further democratizes access to such technologies.

The students assert that they do not intend to use their project for malicious purposes and will refrain from publishing it. They emphasize that their primary objective is to raise awareness of the dangers posed by current technologies. Their work serves as a reminder that individuals should take steps to limit how easily their personal data can be accessed; however, completely erasing one’s digital footprint is nearly impossible.




