How Red Hat is embracing AI to make sysadmin lives easier


I’ll be so glad once every technology or business press release no longer begins, “Now with AI!” Most of the time, it’s just lip service. And then there’s Red Hat, which is integrating AI across its product line, including Red Hat Enterprise Linux AI (RHEL AI), Red Hat OpenShift AI, and Red Hat Ansible Automation. Here’s what each one does and how they fit together.

Red Hat was working with AI well before this rush of announcements. Red Hat’s first serious AI work was with Red Hat Lightspeed, a generative AI service with an automation-specific foundation model. Lightspeed, which uses natural language processing (NLP) to turn prompts into code, first appeared in the Ansible DevOps program, where it helped to simplify complex system administration jobs. In particular, it was designed to demystify the creation of Ansible Playbooks.
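To give a sense of what that looks like in practice, here is a purely illustrative sketch (not actual Lightspeed output) of the kind of playbook a natural-language prompt such as “install nginx and make sure it’s running” might be turned into. The host group name `webservers` is a placeholder:

```yaml
# Hypothetical example of a generated Ansible Playbook --
# the sort of boilerplate Lightspeed aims to write for you.
- name: Install and start nginx
  hosts: webservers
  become: true
  tasks:
    - name: Install the nginx package
      ansible.builtin.dnf:
        name: nginx
        state: present

    - name: Ensure nginx is enabled and running
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

The value proposition is that admins describe the outcome in plain English and review the generated tasks, rather than writing module names and parameters from memory.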

Also: IBM open-sources its Granite AI models – and they mean business

Moving forward, RHEL AI is Red Hat’s foundational AI platform. Currently available only as a developer preview, RHEL AI is designed to streamline generative AI model development, testing, and deployment. The platform fuses IBM Research’s open-source-licensed Granite large language model (LLM) family, the LAB methodology-based InstructLab alignment tools, and a collaborative approach to model development via the InstructLab project.

IBM Research pioneered the LAB methodology, which employs synthetic data generation and multi-phase tuning to align AI/ML models without costly manual effort. The LAB approach, refined through the InstructLab community, enables developers to build and contribute to LLMs just as they would to any open-source project.

With the launch of InstructLab, IBM has released select Granite English language and code models under an Apache license, providing transparent datasets for training and community contributions. The Granite 7B English language model is now integrated into InstructLab, where users can collaboratively enhance its capabilities.

RHEL AI is meant to simplify enterprise-wide adoption by offering a fully optimized, bootable RHEL image for server deployments across hybrid cloud environments. These optimized bootable model runtime instances work with Granite models and InstructLab tooling packages. They include optimized PyTorch runtime libraries, GPU acceleration for AMD Instinct MI300X, Intel, and NVIDIA GPUs, and NeMo frameworks.

RHEL AI is also integrated within OpenShift AI, Red Hat’s machine learning operations (MLOps) platform, allowing for large-scale model implementation in distributed clusters.

Also: The Linux Foundation and tech giants partner on open-source generative AI enterprise tools

That’s one side of RHEL AI. Another is that it will use Lightspeed to help you deploy, manage, and maintain your RHEL instances. For example, at Red Hat Summit, Red Hat demoed how it could check for Common Vulnerabilities and Exposures (CVE) security patches, and you could then tell your system to go ahead and implement the patch.
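For context, this is the kind of remediation step a Lightspeed CVE check could hand off to. A hedged sketch, not Red Hat’s actual implementation: the standard Ansible `dnf` module can already apply pending security errata, and the host group `rhel_servers` is a placeholder:

```yaml
# Illustrative only: upgrade packages that have outstanding
# security advisories, the manual equivalent of "apply that patch."
- name: Apply outstanding security updates
  hosts: rhel_servers
  become: true
  tasks:
    - name: Upgrade packages flagged by security advisories
      ansible.builtin.dnf:
        name: "*"
        state: latest
        security: true
```

The promise of the Lightspeed workflow is that the admin asks about a CVE in plain language and approves the fix, instead of assembling tasks like this by hand.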

Next up: OpenShift AI, which includes RHEL AI, enables enterprises to scale workflows and manage models via Kubernetes-powered MLOps. IBM’s watsonx.ai enterprise studio users will benefit from this integration by gaining access to improved model governance and pricing.

Like RHEL AI, this new AI-friendly version of OpenShift, Red Hat’s Kubernetes distribution, includes Lightspeed to make OpenShift easier to use. For instance, it will recommend how to deploy new applications, when to use autoscaling, and the appropriate sizes for cloud instances. Further, it will monitor your application; after the app has been up and running for a while, Lightspeed will autoscale the app’s resources down if the capacity requirements are lower than expected.

In short, said Ashesh Badani, Red Hat’s senior VP and chief product officer, “Red Hat Lightspeed puts production-ready AI into the hands of the users who can deliver the most innovation more quickly: the IT organization.”

Also: Why open-source generative AI models are still a step behind GPT-4

Finally, in Ansible, Red Hat has added “policy as code” to its bag of tricks. Why? Sathish Balakrishnan, Ansible’s VP and general manager, explained that as AI scales the capabilities of individual systems beyond what we can manage, the challenge of maintaining IT infrastructure has grown ever larger.

From where Balakrishnan sits, “AI is the final stage of the automation adoption journey. In the context of enterprise IT ops, AI means machines automating processes, connecting infrastructure and tools to make them more efficient, and making decisions to improve resiliency and reduce costs.”

By using AI to automate policy as code, Red Hat sees the new Ansible efficiently enforcing internally and externally mandated policies at the start of a new IT project and then managing its operations at scale.
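Red Hat hasn’t published the final syntax for this feature, but the general idea of policy as code can be sketched with plain Ansible today. This hypothetical check fails a play early if a host violates a mandated security baseline:

```yaml
# Hypothetical policy check, not Red Hat's actual policy-as-code
# implementation: refuse to proceed unless SELinux is enforcing.
- name: Enforce internal security policy
  hosts: all
  tasks:
    - name: Require SELinux to be enabled and enforcing
      ansible.builtin.assert:
        that:
          - ansible_selinux.status == "enabled"
          - ansible_selinux.mode == "enforcing"
        fail_msg: "Policy violation: SELinux must be enforcing"
```

The pitch is that AI would generate and maintain checks like this from written policy documents, so compliance is enforced automatically rather than audited after the fact.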

If it strikes you that there’s a single theme uniting all these programs, you’re right. Red Hat is using AI to make life easier for its systems administrators. Yes, you’ll be able to build AI programs on RHEL and OpenShift, but for the near-term future, Red Hat AI is all about integrating the entire Red Hat software family into an easy-to-manage, smart software stack for all its customers.




