Superhuman Agents

🗞️ The Tech Issue | February 1, 2024

In partnership with Amazon Web Services

Join this webinar to explore how to establish effective observability in dynamic, continuously deployed, and decoupled systems. You’ll discover where and what to instrument across your SDLC, and how to turn telemetry into clear insights in today's evolving tech environment.

You will learn how to:

🟠 Choose an optimal data type and service to monitor the progression of your applications throughout the SDLC.
🟠 Enhance your ability to store, query, and visualize system telemetry for more effective insights into your systems.
🟠 Enable transparency and address issues proactively before they impact operations to systematically enhance value delivery and resiliency.
🟠 Use tools from AWS and the broader DevOps landscape, available through AWS Marketplace, to establish a comprehensive observability practice.

☕️ Greetings, and welcome to my daily dive into the Generative AI landscape.

In today’s issue:

  • Superhuman Agents

  • Prompt Engineering is Different for Open Source LLMs

  • General Assembly Launches Suite of Upskilling Programs to Prepare Businesses for an AI-Driven Economy

  • Can ChatGPT drive my car? The case for LLMs in autonomy

  • Sam Altman’s South Korea Visit Signals OpenAI’s Ambition to Develop Own AI Chips with Samsung

  • And more

🔔 Please forward this newsletter to your friends and team members and invite them to join. This will help me grow my reach. Thanks, Qamar.

🗞️ Superhuman Agents

Meta and New York University researchers have introduced "Self-Rewarding Language Models" (SRLMs), a groundbreaking approach to training AI language models. These models distinguish themselves by generating their own rewards during training, which fosters continuous self-improvement. The method diverges from human-feedback-dependent training techniques like RLHF and DPO, and could in principle push models past human performance levels. Using Meta's Llama 2 70B as a base, SRLMs self-evaluate and adapt, demonstrating a significant performance leap in initial tests. However, uncertainties remain regarding real-world application, the potential for 'reward hacking', and the need for human evaluation to validate their effectiveness.

Summary:

  1. Innovative Concept: SRLMs are designed to self-improve by generating their own rewards, in contrast with conventional human-reliant methods.

  2. Self-Evaluation Mechanism: These models use their responses to queries as a basis for self-assessment and subsequent self-improvement, bypassing human input.

  3. Performance Edge: In preliminary tests, the Llama 2 70B-based SRLM outperformed established models like Claude 2 and GPT-4 0613.

  4. Benchmarking Success: The model showed notable enhancements in instruction-following capabilities, as per the AlpacaEval 2.0 benchmark.

  5. Longer Outputs: There's a trend towards lengthier responses in the self-rewarding models, raising questions about efficiency and practicality.

  6. Future Research Directions: Further studies are needed to explore real-world performance, assess susceptibility to reward hacking, and validate outputs through human evaluations.
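The loop described above — sample candidate responses, score them with the model acting as its own judge, and turn the best and worst into a preference pair for a DPO-style update — can be sketched in toy Python. This is an illustrative simplification, not the paper's implementation: the generator and judge here are stand-in stubs, whereas the actual method uses Llama 2 70B for both roles.

```python
import random

def generate_candidates(model, prompt, n=4):
    """Sample n candidate responses from the current model (stubbed)."""
    return [model(prompt) for _ in range(n)]

def judge_score(model, prompt, response):
    """LLM-as-a-judge: the model scores its own response on a 0-5 scale.
    Stubbed here with word count as a stand-in quality signal."""
    return min(5, len(response.split()))

def build_preference_pair(model, prompt):
    """Best- and worst-scored candidates become a (chosen, rejected)
    preference pair, which would then feed a DPO training step."""
    candidates = generate_candidates(model, prompt)
    ranked = sorted(candidates, key=lambda r: judge_score(model, prompt, r))
    return ranked[-1], ranked[0]

# Toy "model": returns a response of random length.
toy_model = lambda prompt: " ".join(["word"] * random.randint(1, 10))
chosen, rejected = build_preference_pair(toy_model, "Explain DPO briefly.")
```

The key point the sketch captures is that no human labeler appears anywhere in the loop: the same model produces the data, judges it, and is updated on the resulting pairs, which is what allows the iterative self-improvement the paper reports.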


🗞️ TRENDS

In 2024, OutSystems anticipates a transformative shift in technology. Gartner predicts a substantial increase in the adoption of AI coding assistants among enterprise software engineers. Developers are expected to leverage the synergy of low-code and generative AI, accelerating application development. Customer-facing roles will witness rapid AI-driven improvements, enhancing productivity and personalized interactions. Cloud-native technology will be pivotal for cost-effective growth. Low-code platforms will break communication barriers, fostering collaboration between tech and business teams. Embracing these changes is essential for staying competitive in the evolving tech landscape.

  • By 2028, 75% of enterprise software engineers will use AI coding assistants, a significant increase from less than 10% in early 2023 (Gartner prediction).

  • Generative AI combined with low-code development offers the largest productivity gains.

  • Developers in 2024 will leverage low-code and generative AI to build, test, and update applications rapidly.

  • Upskilling and reskilling programs are crucial for developers transitioning to AI tools.

  • Customer-facing roles will experience rapid AI-driven improvements, boosting productivity and personalization.

  • Cloud-native technology is essential for cost-effective growth.

  • Low-code development platforms simplify communication between tech and business teams, fostering collaboration.

🗞️ IMPACT (Economy, Workforce, Culture, Life)

Generative AI has evolved from hype to an essential tech component for businesses, particularly in code testing, project management, and application development. In 2024, it's predicted to dominate code testing workflows, with up to 80% automation. These tools enhance developer productivity, fostering agility and innovation. They also open doors to advanced software capabilities based on historical data. However, responsible adoption and addressing bias are essential, and AI's threat to intellectual property and privacy remains a pressing concern. Despite these challenges, 2024 is poised to see AI integrated into everyday applications, improving network connectivity, predictability, troubleshooting, and security.

🗞️ OPINION (Opinion, Analysis, Reviews, Ideas)

Meta AI recently introduced 'Prompt Engineering with Llama 2,' a resource for the open-source community, while Andrew Ng's DeepLearning.AI launched a course on the same topic. IBM, Amazon, Google, and Microsoft also offer prompt engineering courses for open-source models. Prompt engineering gained prominence in 2023, with experts in high demand to elicit desired responses from chatbots, particularly with OpenAI's ChatGPT. Open source LLMs like Meta's LLaMA and Mistral require a different approach to prompt engineering. Sharon Zhou, CEO of Lamini, emphasized the simplicity of prompt engineering, cautioning against overcomplication and stressing the importance of tailored prompts. Enterprises are turning to open source models for specific use cases, driving the need for precise prompting and information retrieval.
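One concrete way open-source prompting differs from the ChatGPT experience: instead of passing role-based messages to an API, you typically spell out the model's chat template yourself. As a minimal sketch, the helper below builds a single-turn prompt in Llama 2's documented chat format (`[INST]` / `<<SYS>>` markers); other open models such as Mistral use related but distinct templates, so the exact string is model-specific.

```python
def llama2_prompt(system: str, user: str) -> str:
    """Build a single-turn Llama 2 chat prompt.

    Open-source chat models expect their special markers to appear
    literally in the prompt string, unlike role-based hosted APIs.
    """
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system}\n"
        "<</SYS>>\n\n"
        f"{user} [/INST]"
    )

prompt = llama2_prompt(
    "You are a concise assistant.",
    "Summarize prompt engineering in one sentence.",
)
```

Getting this template wrong is a common source of degraded output from open models, which is part of why resources like 'Prompt Engineering with Llama 2' exist.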

🗞️ LEARNING (Tools, Frameworks, Skills, Guides, Research)

Digital content creation benefits greatly from Generative AI, a tool that enhances the production of diverse materials such as articles, social media posts, images, and videos. While it doesn't always produce perfect results on the first try, a collaborative approach with smart prompting can significantly aid creators. However, effective use requires understanding the technology's capabilities and limitations. For instance, Generative AI is adept at drafting general content but may falter with specialized topics or nuanced emotions. Clear objectives for AI-generated content, ethical practices like transparency, quality control, continuous adaptation, and a balance of AI and human input are essential. These ten best practices help content teams effectively utilize Generative AI, ensuring quality, innovation, and maintaining a human touch in their work.

🗞️ BUSINESS (Use Cases, Industry spotlight)

Generative AI is revolutionizing smart manufacturing by optimizing processes, aiding troubleshooting, and boosting worker productivity. Its applications range from enhancing frontline worker training to improving manufacturing efficiency. Key to its impact is the ability to analyze data and generate real-time, tailored solutions, as evidenced by 82% of organizations in a Google study anticipating significant industry transformations due to GenAI.

Summary:

  1. Generative AI Impact: Transforms manufacturing through data-driven process optimization and worker support.

  2. Benefits: Increases production efficiency, improves problem-solving, and enhances worker productivity.

  3. Industry Perspective: 82% of organizations recognize its potential for significant industry change.

  4. Applications:

    • Customized training and support for frontline workers.

    • Real-time, context-aware operational guidance.

    • Manufacturing process improvements.

  5. Use Cases:

    • Enhanced troubleshooting and predictive maintenance.

    • Digital conversion of tribal knowledge.

    • Individualized worker training and skills gap analysis.

  6. Connected Worker Success: Combines GenAI with smart technologies for better efficiency and worker empowerment.

🗞️ LATEST FROM THE WEB

Can ChatGPT drive my car? The case for LLMs in autonomy: The evolution of AI has seen a significant shift towards larger, multi-modal models like Microsoft’s Florence 2 and OpenAI’s GPT-4V, which outperform smaller, task-specific models by handling a wide array of tasks across various domains. These advancements extend to autonomous driving, where current small models face challenges like zero-shot generalization, interpreting intent, and scaling. Large language models (LLMs) offer potential solutions, enabling more sophisticated reasoning and problem-solving. However, limitations like latency and hallucinations still exist, necessitating further improvements and innovative approaches like reinforcement learning with human feedback and hybrid-cloud architectures. Integrating LLMs in autonomous driving is key to achieving safety and scalability for mainstream adoption. Read more at infoworld.com.

Gen AI and quantum to design new drugs: Paris-based Aqemia completes €60m Series A: Paris-based biotech Aqemia, specializing in AI-driven drug discovery, has secured a €30m extension to its Series A funding, totaling €60m. Initially raising €30m in 2022 with key investors like Eurazeo and Bpifrance, the recent boost led by Wendel Growth includes previous backers. Aqemia's AI technology expedites the traditionally lengthy process of discovering and designing drug molecules. The company is advancing its drug discovery pipeline, with three cancer-targeting drugs currently in preclinical animal tests. Aqemia's method, combining generative AI and quantum-inspired algorithms, accelerates candidate generation and testing, outpacing the conventional 5-6 year timeframe. It's also working on clinical trials and pharmaceutical partnerships, like a $140m deal with Sanofi, while planning to expand its workforce significantly. Read more at sifted.eu.

Apple iOS 18 expected to be a big software update, iPhone could be getting AI features: In 2024, Apple is poised to make a significant leap into generative AI with iOS 18, potentially marking its most notable software update to date. Bloomberg's Mark Gurman hints at groundbreaking changes, focusing on an enhanced Siri powered by Apple's own language model, "Ajax GPT." This move reflects Apple's strategy of prioritizing quality over speed, avoiding pitfalls faced by other AI systems. Additionally, Apple plans to integrate AI into apps like Apple Music, Pages, Keynote, and its Xcode platform. Uniquely, Apple's AI will operate on its devices' neural engines, not cloud servers. iOS 18 might also introduce Rich Communication Services (RCS) for better messaging capabilities across platforms. The reveal is anticipated at the Worldwide Developers Conference in June, with a public release likely in September. Read more at techspot.com.

Sam Altman’s South Korea Visit Signals OpenAI’s Ambition to Develop Own AI Chips with Samsung and SK Hynix: OpenAI CEO Sam Altman visited South Korea to meet with Samsung Electronics and SK Hynix executives, signaling a potential strategic shift towards AI chip development. His visit, focused on addressing AI system chip shortages, involved exploring collaborations with these semiconductor giants, known for their High Bandwidth Memory technology crucial for AI chips. Altman toured Samsung's Pyeongtaek chip plant and held discussions about possible partnerships and investments in AI chip production. This move could lead to an 'AI Chip Alliance', attracting significant funding and shaping the semiconductor industry. Altman's actions reflect an interest in developing proprietary AI chips, possibly in cooperation with Samsung and SK Hynix, leveraging their technological expertise and market dominance. Read more at koreatechtoday.com.

General Assembly Launches Suite of Upskilling Programs to Prepare Businesses for an AI-Driven Economy: General Assembly has introduced a range of upskilling programs, aimed at equipping businesses for the AI-driven economy. These programs cater to both technical and non-technical employees in various industries, addressing the need for AI skills at all organizational levels. The initiative responds to research indicating over 80% of executives believe AI will reduce costs and boost revenue, despite a prevalent concern about the shortage of skilled AI talent. The courses range from AI for Leaders, offering C-suite executives foundational AI knowledge, to Applied AI Engineering for software engineers and data scientists. This effort aligns with the increasing pace of AI integration in businesses, focusing on practical application and future-ready skills development. Read more at datanami.com.

Join my community by subscribing to my newsletter below:
