Amazon updates AI enterprise solutions with guardrails at AWS Summit

Radhika Rajkumar/ZDNET

Amazon held its annual AWS (Amazon Web Services) Summit at the Javits Center in New York City on Wednesday. The event focused on the cloud computing giant’s latest work in generative artificial intelligence (AI), and featured partner expos, workshops, talks, and a keynote address. 

Across all of its announcements, AWS emphasized accelerating developer productivity and making scalable generative AI available to more organizations. It also focused on security and responsibility, spotlighting its partnership with Anthropic. 

During the Summit keynote address, Dr. Matt Wood, vice president for AI Products at AWS, noted that customers from regulated industries and the public sector grow fastest on AWS because their existing compliance work gives them a strong foundation for generative AI, leaving them well prepared to meet security requirements from the start of an AI build. Wood reiterated that security is baked into AWS gen AI applications “from day one,” and that it remains the company’s highest priority. 

Here are the biggest highlights from AWS Summit 2024. 

AWS App Studio

Radhika Rajkumar/ZDNET

One of the Summit’s breakout announcements was App Studio, which is currently in public preview. The new gen-AI-powered platform allows technical professionals to build comprehensive apps using natural language descriptions and prompts. Users can specify the app’s function and the data sources it should pull from, and App Studio will produce an app “that could have taken a professional developer days to build from scratch” in minutes, the release states. 

In a demo at AWS Summit, Amazon showed ZDNET how App Studio can take a request for an invoice tracking app, for example, and lay out suggestions for how it should work. Once the user approves the overview and App Studio creates the app, the user can edit it with easy-to-use drag-and-drop functions before deploying it. 

App Studio also integrates with third-party services and AWS through connectors. Adam Seligman, vice president of developer experience at AWS, told ZDNET at the summit that the company anticipates App Studio will evolve to feature more integrations based on customer feedback. 

Amazon Q updates 

Radhika Rajkumar/ZDNET

Amazon announced several updates to Q, the company’s enterprise AI assistant, emphasizing support for developers. After becoming generally available in April, Amazon Q Developer is now also available in SageMaker, Amazon’s machine learning (ML) development environment. Previously, Q was only available in the AWS console and developer environments like IntelliJ IDEA, Visual Studio, and VS Code. Amazon Q provides product guidance, generates code, and can help developers troubleshoot issues. 

During the keynote, Wood called the integration a “step function change in the ease of use” for organizations to accelerate their ML workloads. 

“In SageMaker Studio, data scientists and ML engineers get all the existing capabilities of Q like code generation, debugging, and advice, as well as specialized assistance for tasks like data preparation, model training, and model deployment,” Swami Sivasubramanian, VP of AI & Data, explained in a statement. 

Developers can ask Q how to fine-tune their LLM, and Q will return a set of instructions, complete with sample code. Q can also advise users on which data-preparation approach to take based on use case, code or no-code preference, and data format. 

Another reveal was that Amazon Q Apps, a feature of Amazon Q Business, is now generally available. It lets employees create apps from their company data by sending Q a descriptive prompt in natural language. Employees can also generate a reusable app from a conversation with the assistant for tasks like “summarizing feedback, creating onboarding plans, writing copy, drafting memos, and more,” Sivasubramanian continued in the release. 

The launch follows the broader trend of rolling out AI assistants across industries and skill levels to take routine work off employees’ plates.  

Amazon Bedrock updates 

The company also announced updates to Bedrock, its enterprise platform for building and scaling generative AI applications. Bedrock offers a broad range of foundation models, allowing companies to build with a single model or several, depending on the use case.
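
For developers wondering what building with “one model or several” looks like in practice, here is a minimal sketch using the AWS SDK for Python (boto3) and Bedrock’s Converse API, which gives different providers’ models a common request shape. The model IDs and prompt are illustrative, and availability varies by region and account.

```python
import boto3

# Bedrock's Converse API uses the same request shape across providers,
# so switching models is mostly a matter of changing the modelId string.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(model_id: str, prompt: str) -> str:
    """Send a single-turn prompt to a Bedrock model and return the text reply."""
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

# The same application code can target different models as needs change.
print(ask("anthropic.claude-3-haiku-20240307-v1:0",
          "Explain retrieval-augmented generation in one sentence."))
print(ask("amazon.titan-text-express-v1",
          "Explain retrieval-augmented generation in one sentence."))
```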

As of Wednesday, users can fine-tune Anthropic’s Claude 3 Haiku model in Bedrock. Fine-tuning lets organizations tailor a model to their own data and use cases, making customization easier. The capability is currently in preview, and it is the first time Claude 3 models have been available for fine-tuning. 
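
For teams that want to try it, here is a rough sketch of kicking off a fine-tuning job through boto3. The IAM role, S3 paths, and hyperparameter values are placeholders rather than anything AWS published at the Summit, and the exact base-model identifier and supported hyperparameters should be confirmed in the Bedrock console.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-west-2")

# Start a fine-tuning (model customization) job against Claude 3 Haiku.
# All identifiers below are placeholders for illustration only.
job = bedrock.create_model_customization_job(
    jobName="support-assistant-haiku-ft",
    customModelName="support-assistant-haiku",
    roleArn="arn:aws:iam::123456789012:role/BedrockFineTuningRole",  # placeholder IAM role
    baseModelIdentifier="anthropic.claude-3-haiku-20240307-v1:0",    # confirm exact ID in the console
    customizationType="FINE_TUNING",
    trainingDataConfig={"s3Uri": "s3://example-bucket/train.jsonl"},  # labeled training examples
    outputDataConfig={"s3Uri": "s3://example-bucket/output/"},
    hyperParameters={"epochCount": "2"},  # illustrative; supported keys vary by model
)

print("Started customization job:", job["jobArn"])
```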

As we know, better data means better generative AI output. Amazon is adding new data sources to Knowledge Bases for Amazon Bedrock, including connectors for Confluence, SharePoint, and Salesforce, as well as custom web sources and improved accuracy for CSV and PDF data. This allows organizations to further customize their models with more business data. Knowledge Bases already connects to private sources like Amazon Aurora, MongoDB, Pinecone, and more. 
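
Once a knowledge base is wired up to one of these sources, applications query it the same way regardless of where the documents live. Here is a hedged boto3 sketch of a retrieval-augmented query against a knowledge base; the knowledge base ID is a placeholder and the model ARN is an example.

```python
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Ask a question answered from the documents indexed in the knowledge base
# (e.g., pages synced from Confluence, SharePoint, or Salesforce).
response = agent_runtime.retrieve_and_generate(
    input={"text": "What is our travel reimbursement policy?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB_ID_PLACEHOLDER",  # placeholder knowledge base ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
        },
    },
)

print(response["output"]["text"])  # generated answer
print(len(response["citations"]), "citations back to source documents")
```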

New capabilities for Agents

Amazon announced two new capabilities for Agents in Amazon Bedrock: memory retention and code interpretation, both of which improve customization. 

With memory retention, Agents can now pick up where a user’s previous query left off; previously, they were limited to the information available within a single session. An agent booking a flight, for example, can now reference details from earlier interactions, such as the last time you traveled.

Wood noted in the keynote that AWS customers want agents to perform complex analytics, beyond simple automated tasks. To address this, AWS leveraged Agents’ ability to write code: they can now generate and execute code in a sandboxed environment. This allows agents to analyze data and create graphs “to tackle complex data-driven use cases, such as data analysis, data visualization, text processing, solving equations, and optimization problems,” Sivasubramanian said in the release. For instance, the capability could be used to analyze real estate price data to inform investment decisions.

Code interpretation is limited to a sandboxed environment to avoid potential chaos from agents creating and executing unvetted code. Amazon also noted that users can directly upload documents, making instructing agents more straightforward. 
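
Here is a rough sketch of how an application might invoke an agent through boto3 while taking advantage of memory retention. The agent and alias IDs are placeholders, and the memoryId parameter reflects how the feature was described at launch; confirm parameter names against the current SDK.

```python
import uuid
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

def ask_agent(prompt: str, session_id: str) -> str:
    """Invoke a Bedrock agent and assemble its streamed reply into a string."""
    response = agent_runtime.invoke_agent(
        agentId="AGENT_ID_PLACEHOLDER",          # placeholder agent ID
        agentAliasId="AGENT_ALIAS_PLACEHOLDER",  # placeholder alias ID
        sessionId=session_id,
        memoryId="traveler-1234",  # reusing the same ID across sessions lets the agent recall past trips
        inputText=prompt,
    )
    # The reply arrives as a stream of completion chunks.
    return "".join(
        event["chunk"]["bytes"].decode("utf-8")
        for event in response["completion"]
        if "chunk" in event
    )

# A brand-new session, but the shared memoryId means earlier bookings can inform this one.
print(ask_agent("Book me a flight like my last trip.", str(uuid.uuid4())))
```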

Updates to Guardrails 

Radhika Rajkumar/ZDNET

To address customer concerns about hallucinations in generative AI, Amazon announced contextual grounding checks within Guardrails for Bedrock, the company’s existing set of generative AI controls for reducing harmful output. Contextual grounding checks detect and block hallucinations in model responses for customers using retrieval-augmented generation (RAG) and summarization. The feature also checks that a model’s response can be attributed to the correct enterprise source data and is relevant to the user’s original query. 

Bedrock already provides filters for words, topics, harmful content, and personal data; this update goes further by addressing hallucinations themselves. AWS reports that Guardrails filters up to 75% of hallucinations. AWS also shared a progress update on its responsible AI initiatives, which include visual watermarking efforts, policy guidelines, and usage training resources. 

Guardrails currently lives within Bedrock, but to extend its responsible AI reach, AWS also announced a standalone API version that will let customers apply the same safeguards to foundation models that aren’t hosted on Bedrock. 
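
For teams running models outside Bedrock, here is a hedged sketch of what calling the standalone Guardrails API could look like in boto3. The guardrail ID and version are placeholders, and the grounding-source and query qualifiers follow AWS’s description of contextual grounding checks; verify field names in the current SDK documentation.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Check a candidate answer (produced by any model, hosted anywhere) against the
# trusted source passage and the user's question before showing it to the user.
result = bedrock_runtime.apply_guardrail(
    guardrailIdentifier="GUARDRAIL_ID_PLACEHOLDER",  # placeholder guardrail ID
    guardrailVersion="1",
    source="OUTPUT",  # evaluating a model response rather than a user input
    content=[
        {"text": {"text": "Refunds are issued within 5 business days.",
                  "qualifiers": ["grounding_source"]}},            # the trusted source passage
        {"text": {"text": "How long do refunds take?",
                  "qualifiers": ["query"]}},                       # the user's question
        {"text": {"text": "Refunds are issued within 30 days."}},  # the candidate answer to check
    ],
)

# "GUARDRAIL_INTERVENED" indicates the response failed a check, such as grounding.
print(result["action"])
```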

Other AI announcements

As part of AI Ready, Amazon’s free cloud computing skills training initiative, the company also announced AWS SimuLearn, an interactive learning platform that “pairs generative AI-powered simulations and hands-on training, to help people learn how to translate business problems into technical solutions,” said Sivasubramanian in the announcement. Amazon noted that it has already beaten its 2025 goal of training 29 million people around the world, having reached 31 million as of July 2024. 




