The Only Artificial Intelligence Stocks Investors Should Buy and Hold

Companies like VERSES AI Inc. (CBOE: VERS) (OTCQX: VRSSF) are racing toward Artificial General Intelligence (AGI), in which AI can perform all human cognitive skills better than the smartest human, as noted by Forbes. In fact, as TechTarget.com also notes, “AGI should theoretically be able to perform any task that a human can and exhibit a range of intelligence in different areas without human intervention. Its performance should be as good as or better than humans at solving problems in most areas.” That is powerful news not only for VERSES AI, but also for Nvidia (NASDAQ: NVDA), Microsoft (NASDAQ: MSFT), C3 AI (NYSE: AI) and Amazon.com (NASDAQ: AMZN), to name a few of the leaders in the space.

Look at VERSES AI Inc. (CBOE: VERS) (OTCQX: VRSSF), For Example

VERSES AI Inc., a cognitive computing company specializing in next-generation intelligent systems, announced that a team led by its Chief Scientist, Dr. Karl Friston, has published a paper titled “From pixels to planning: scale-free active inference.” The paper introduces Renormalizing Generative Models (RGMs), an efficient alternative to deep learning, reinforcement learning and generative AI that addresses foundational problems in artificial intelligence (AI), namely versatility, efficiency, explainability and accuracy, using a physics-based approach.

‘Active inference’ is a framework with origins in neuroscience and physics that describes how biological systems, including the human brain, continuously generate and refine predictions based on sensory input, with the objective of becoming increasingly accurate. While the science behind active inference is well established and considered a promising alternative to state-of-the-art AI, it had not, until now, demonstrated a viable pathway to scalable commercial solutions. RGMs accomplish this using a “scale-free” technique that adjusts to any scale of data.
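
To make that idea concrete, here is a minimal, hedged sketch in Python of an agent that refines an internal estimate so its predicted observation matches noisy sensory input by descending a simple free-energy score. It is a toy Gaussian example for intuition only, with invented numbers, and is not the RGM formulation from the paper.

```python
# Toy active-inference-style belief update: descend a simple free-energy score
# so the belief settles between the prior and the noisy observation.
import numpy as np

def free_energy(mu, y, prior_mu, sigma_y=1.0, sigma_p=1.0):
    """Sensory prediction error plus deviation from the prior belief."""
    return ((y - mu) ** 2 / (2 * sigma_y ** 2)
            + (mu - prior_mu) ** 2 / (2 * sigma_p ** 2))

def infer(y, prior_mu=0.0, lr=0.1, steps=200):
    """Refine the belief mu by gradient descent on free_energy
    (analytic derivative, unit variances)."""
    mu = prior_mu
    for _ in range(steps):
        mu -= lr * ((mu - y) + (mu - prior_mu))
    return mu

rng = np.random.default_rng(0)
observation = 2.0 + rng.normal(scale=0.5)   # noisy sense datum around a true value of 2
mu = infer(observation)
print(f"belief: {mu:.3f}, free energy: {free_energy(mu, observation, 0.0):.3f}")
```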

“RGMs are more than an evolution; they’re a fundamental shift in how we think about building intelligent systems from first principles that can model space and time dimensions like we do,” said Gabriel René, CEO of VERSES. “This could be the ‘one method to rule them all.’ Because it enables agents that can model physics and learn the causal structure of information, we can design multimodal agents that can not only recognize objects, sounds and activities but can also plan and make complex decisions based on that real-world understanding, all from the same underlying model. This promises to dramatically scale AI development, expanding its capabilities while reducing its cost.”

The paper describes how Renormalizing Generative Models using active inference were able to effectively perform many of the fundamental learning tasks that today require individual AI models, such as object recognition, image classification, natural language processing, content generation, file compression and more. RGMs are a versatile “universal architecture” that can be configured and reconfigured to perform any or all of the same tasks as today’s AI, but with far greater efficiency. The paper describes how an RGM achieved 99.8% accuracy on a subset of the MNIST digit recognition task, a common benchmark in machine learning, using only 10,000 training images (90% less data). This sample and compute efficiency translates directly into cost savings and development speed for businesses building and employing AI systems. Upcoming papers are expected to further demonstrate the effective and efficient learning of RGMs, with related research applied to MNIST and other industry-standard benchmarks such as the Atari Challenge.
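
For context on the benchmark, below is a hedged sketch of what training on a 10,000-image subset of MNIST looks like with an ordinary deep-learning baseline in PyTorch. It does not reproduce the RGM method; the network, hyperparameters and printed accuracy are illustrative assumptions used only to frame the sample-efficiency comparison.

```python
# Conventional baseline trained on only 10,000 MNIST images, for comparison purposes.
import torch
from torch import nn
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

train_full = datasets.MNIST("data", train=True, download=True,
                            transform=transforms.ToTensor())
test_set = datasets.MNIST("data", train=False, download=True,
                          transform=transforms.ToTensor())
train_10k = Subset(train_full, range(10_000))       # the reduced training budget

model = nn.Sequential(                              # small illustrative classifier
    nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for x, y in DataLoader(train_10k, batch_size=128, shuffle=True):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

correct = sum((model(x).argmax(1) == y).sum().item()
              for x, y in DataLoader(test_set, batch_size=256))
print(f"baseline accuracy with 10k training images: {correct / len(test_set):.3f}")
```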

"The brain is incredibly efficient at learning and adapting and the mathematics in the paper offer a proof of principle for a scale-agnostic, algorithmic approach to replicating human-like cognition in software," said Dr. Friston. Instead of conventional brute-force training on a massive number of examples, RGMs “grow” by learning about the underlying structure and hidden causes of their observations. "The inference process itself can be cast as selecting (the right) actions that minimize the energy cost for an optimal outcome," Friston continued.

Your brain doesn't process and store every pixel independently; instead, it “coarse-grains” patterns, objects, and relationships from a mental model of concepts: a door handle, a tree, a bicycle. RGMs likewise break down complex data, such as images or sounds, into simpler, compact, hierarchical components and learn to predict these components efficiently, reserving attention for the most informative or unique details. For example, driving a car becomes “second nature” once we have mastered it well enough that the brain is primarily looking for deviations from our normal expectations.
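
Here is a minimal sketch of the coarse-graining intuition, assuming simple block averaging: each level summarizes the level below it, keeping progressively higher-level structure. This is illustrative only and is not the renormalization scheme used in the paper.

```python
# Build a hierarchy of progressively coarser summaries of an image.
import numpy as np

def coarse_grain(image, block=2):
    """Average non-overlapping block x block patches (dimensions must divide evenly)."""
    h, w = image.shape
    return image.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def build_hierarchy(image, levels=3):
    """Return the image plus successively coarser summaries of it."""
    pyramid = [image]
    for _ in range(levels):
        pyramid.append(coarse_grain(pyramid[-1]))
    return pyramid

img = np.random.default_rng(0).random((32, 32))
for level, layer in enumerate(build_hierarchy(img)):
    print(f"level {level}: {layer.shape}")   # (32,32) -> (16,16) -> (8,8) -> (4,4)
```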

By way of analogy, Google Maps is made up of an enormous amount of data, estimated at many thousands of terabytes, yet it renders viewports in real time even as users zoom in and out to different levels of resolution. Rather than render the entire data set at once, Google Maps serves up a small portion at the appropriate level of detail. Similarly, RGMs are designed to structure and traverse data such that scale – that is, the amount, diversity, and complexity of data – is not expected to be a limiting factor.
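
Continuing the analogy, here is a hedged sketch of the level-of-detail idea: precompute coarser views once, then answer each request from the level that matches the zoom, returning only the viewport. The tiling scheme and numbers are assumptions for illustration, not how Google Maps or the RGM architecture is actually implemented.

```python
# Serve a small window at the right resolution instead of the full data set.
import numpy as np

def build_pyramid(world, levels=4):
    """Precompute coarser views once: level k keeps every 2**k-th sample."""
    return [world[::2 ** k, ::2 ** k] for k in range(levels)]

def render_viewport(pyramid, zoom, row, col, size=4):
    """Slice a small window out of the requested level only."""
    return pyramid[zoom][row:row + size, col:col + size]

world = np.random.default_rng(1).random((64, 64))   # stand-in for an enormous data set
pyramid = build_pyramid(world)
tile = render_viewport(pyramid, zoom=2, row=3, col=5)
print(tile.shape)   # (4, 4): only a tiny, appropriately coarse slice is served
```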

“Within Genius, developers will be able to create a variety of composable RGM agents with diverse skills that can be fitted to any sized problem space, from a single room to an entire supply network, all from a single architecture,” says Hari Thiruvengada, VERSES's Chief Product Officer.

Further validation of the findings in the paper is required and expected to be presented in future papers slated for publication this year. Thiruvengada adds, “We’re optimistic that RGMs are a strong contender for replacing deep learning, reinforcement learning, and generative AI.”

The full paper is expected to be published on arxiv.org later this week. A webinar featuring Professor Karl Friston discussing the landmark paper is expected to be announced in August.

Other related developments from around the markets include:

Nvidia and Hewlett Packard Enterprise (HPE) announced NVIDIA AI Computing by HPE, a portfolio of co-developed AI solutions and joint go-to-market integrations that enable enterprises to accelerate adoption of generative AI. Among the portfolio’s key offerings is HPE Private Cloud AI, a first-of-its-kind solution that provides the deepest integration to date of NVIDIA AI computing, networking and software with HPE’s AI storage, compute and the HPE GreenLake cloud. The offering enables enterprises of every size to gain an energy-efficient, fast and flexible path for sustainably developing and deploying generative AI applications. Powered by the new OpsRamp AI copilot, which helps IT operations improve workload and IT efficiency, HPE Private Cloud AI includes a self-service cloud experience with full lifecycle management and is available in four right-sized configurations to support a broad range of AI workloads and use cases.

Microsoft will publish fiscal year 2024 fourth-quarter financial results after the close of the market on Tuesday, July 30, 2024, on the Microsoft Investor Relations website at https://www.microsoft.com/en-us/Investor/. A live webcast of the earnings conference call will be made available at 2:30 p.m. Pacific Time.

C3 AI, the Enterprise AI application software company, announced C3 Generative AI for Government Programs, a generative AI application that helps federal, state, and local governments quickly deliver accurate information to the public about government programs spanning healthcare, employment, financial assistance, and more. The application streamlines access to, and comprehension of, complex government programs and processes for hundreds of millions of citizens and residents, helping them navigate systems and services with ease. With C3 Generative AI for Government Programs, federal, state, or local government agencies can eliminate service delays, reduce wait times, make contact centers more effective, and improve the citizen experience. By empowering the public to find answers directly through an intuitive search and chat interface, support centers will see decreased inquiry volumes and strain, allowing service representatives to focus on more complex cases and inquiries.

Amazon Web Services, Inc. (AWS), an Amazon.com company, announced the launch of AWS GenAI Lofts, a global tour designed to foster innovation and community in the evolving landscape of generative artificial intelligence (AI) technology. The initiative will bring collaborative pop-up spaces to key AI hotspots around the world, offering developers, startups, and AI enthusiasts a platform to learn, build, and connect. Visitors will benefit from immersive experiences showcasing cutting-edge generative AI projects, along with workshops, fireside chats, and hands-on programming from AI experts, community groups, AWS Partners (including Anthropic, Cerebral Valley, and Weights & Biases), and venture capital investors.

Legal Disclaimer / Except for the historical information presented herein, matters discussed in this article contain forward-looking statements that are subject to certain risks and uncertainties that could cause actual results to differ materially from any future results, performance or achievements expressed or implied by such statements. Winning Media is not registered with any financial or securities regulatory authority and does not provide, nor claim to provide, investment advice or recommendations to readers of this release. For making specific investment decisions, readers should seek their own advice. Winning Media is only compensated for its services in the form of cash-based compensation. Pursuant to an agreement, Winning Media has been paid three thousand five hundred dollars by VERSES AI Inc. for advertising and marketing services for VERSES AI Inc. We own ZERO shares of VERSES AI Inc. Please click here for disclaimer.

Contact:

Ty Hoffer
Winning Media
281.804.7972
[email protected]