Weekly AI and NLP News — September 18th 2023


Stable Audio, Mixture-of-Experts LLMs, and LLMs for compiler optimization



Here are your weekly articles, guides, and news about NLP and AI chosen for you by NLPlanet!

😎 News From The Web

  • Stable Audio. London-based startup Stability AI, known for its Stable Diffusion image model, has launched Stable Audio, an AI model that generates high-quality music for commercial use and gives users more control over the synthesized audio.
  • Google nears release of AI software Gemini, The Information reports. Google is close to launching Gemini, an advanced large language model intended to rival GPT-4. Currently in early testing, it is expected to power a range of functionalities, including chatbots, text summarization, and code-writing assistance.
  • Pulitzer Prize winner and others sue OpenAI. Pulitzer-winning novelist Michael Chabon and other writers are suing OpenAI for copyright infringement, claiming that the datasets used to train ChatGPT contain their copyrighted works. OpenAI argues that its large language models are protected by “fair use,” igniting debate over AI and copyright law.
  • Adobe’s Firefly generative AI models are now generally available. Adobe has made its generative AI models commercially available across Creative Cloud, including a standalone web app called Firefly. A new “generative credits” system meters usage of Firefly’s AI models: each click of “Generate” consumes one credit.
  • Roblox’s new AI chatbot will help you build virtual worlds. At its 2023 Developers Conference, Roblox introduced the Roblox Assistant, a conversational AI tool designed to help creators build more immersive virtual experiences, from generating virtual environments to implementing basic gameplay behaviors. It will not be available until the end of this year or early next year.

📚 Guides From The Web

  • LLM Training: RLHF and Its Alternatives. This guide covers Reinforcement Learning from Human Feedback (RLHF) and five alternative approaches, each with a corresponding research paper: Constitutional AI, The Wisdom of Hindsight, Direct Preference Optimization, Reinforced Self-Training, and Scaling Reinforcement Learning from Human Feedback with AI Feedback.
  • New AI Usage Data Shows Who’s Using AI — and Uncovers a Population of ‘Super-Users’. Generative AI is experiencing steady growth in usage, with nearly half of the population utilizing it and a third using it daily. Younger generations, particularly Gen Z and Millennials, are the “super users” of generative AI, with 65% of them embracing the technology and trusting its decision-making guidance.
  • Overview of natively supported quantization schemes in 🤗 Transformers. Transformers natively supports two quantization schemes, bitsandbytes and Auto-GPTQ, which make it possible to run large models on smaller devices. bitsandbytes is user-friendly and supports a wide range of models, while Auto-GPTQ offers faster text generation at some potential cost in quality. According to the Open LLM Leaderboard, both schemes cause minimal performance degradation on larger models (a minimal loading sketch follows this list).
  • Validating Large Language Model Outputs. LLMs are powerful but can produce inconsistent results, so validating their outputs is essential for reliable and accurate applications. Guardrails AI is a useful open-source package that improves LLM outputs by adding structural and quality assurances (a validate-and-retry sketch follows this list).
  • Create a Self-Moderated Commentary System with LangChain and OpenAI. This guide walks through building a self-moderated commentary system with OpenAI and LangChain using two models: one generates a response to user input, and the other reviews and, if needed, revises that response before it is published (a two-chain sketch follows this list).
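
To make the quantization guide concrete, here is a minimal sketch of loading a causal LM in 4-bit with bitsandbytes through 🤗 Transformers. It assumes a CUDA GPU with `transformers`, `accelerate`, and `bitsandbytes` installed; the model ID is an illustrative choice, not one prescribed by the article.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization via bitsandbytes, with fp16 compute for speed.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model_id = "meta-llama/Llama-2-7b-hf"  # illustrative; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # let accelerate place layers on the available devices
)

inputs = tokenizer("Quantization lets large models run on", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```

Swapping in Auto-GPTQ mainly changes how the weights are prepared: GPTQ models are quantized ahead of time with a calibration set, whereas bitsandbytes quantizes on the fly at load time.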
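For the validation guide, the snippet below illustrates the general validate-and-retry pattern that packages like Guardrails AI automate. This is plain Python, not the Guardrails API; the `summary` field, the 50-word budget, and the `call_llm` callable are all invented for illustration.

```python
import json

def validate_output(raw: str) -> dict:
    """Run structural and quality checks on a model's raw reply."""
    data = json.loads(raw)  # structural check: must be valid JSON
    if not isinstance(data.get("summary"), str) or not data["summary"]:
        raise ValueError("missing or empty 'summary' field")
    if len(data["summary"].split()) > 50:  # quality check: length budget
        raise ValueError("summary exceeds 50 words")
    return data

def call_llm_validated(call_llm, prompt: str, max_retries: int = 2) -> dict:
    """Re-ask the model, appending the validation error, until the output passes."""
    for _ in range(max_retries + 1):
        raw = call_llm(prompt)
        try:
            return validate_output(raw)
        except ValueError as err:
            prompt += f"\nYour last answer was invalid ({err}). Return corrected JSON only."
    raise RuntimeError("output failed validation after retries")
```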
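For the self-moderation guide, here is a minimal two-chain sketch with LangChain and OpenAI as their APIs stood in late 2023. The prompts and model are illustrative rather than the guide's exact ones, and an `OPENAI_API_KEY` environment variable is assumed.

```python
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.7)

# First chain drafts a reply to the user's comment.
answer_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template(
        "You are a customer-service assistant. Reply to this comment:\n{comment}"
    ),
)

# Second chain moderates the draft before it is published.
moderation_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template(
        "Review the reply below. If it is impolite or reveals internal details, "
        "rewrite it appropriately; otherwise return it unchanged:\n{draft}"
    ),
)

# SimpleSequentialChain feeds the first chain's output into the second.
pipeline = SimpleSequentialChain(chains=[answer_chain, moderation_chain])
print(pipeline.run("Your product broke after two days and nobody answers my emails!"))
```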

🔬 Interesting Papers and Repositories

  • IBM releases MoE LLMs. IBM has released a set of Mixture-of-Experts (MoE) LLMs, including models with 4B and 8B parameters. Because only a subset of experts is activated per token, their computational cost is comparable to that of dense models with far fewer parameters. They were trained on a large dataset and use the ModuleFormer architecture (a toy routing sketch follows this list).
  • Large Language Models for Compiler Optimization. Researchers have trained a transformer model that optimizes LLVM assembly for code size. The model outperforms baseline approaches and shows strong code-reasoning abilities, achieving a 3% reduction in instruction count over the compiler’s output. It generates compilable code 91% of the time and exactly reproduces the compiler’s output 70% of the time.
  • When Less is More: Investigating Data Pruning for Pretraining LLMs at Scale. Researchers have found that simple perplexity-based scoring is more effective than more complex scoring techniques for pruning LLM pretraining data (a scoring sketch follows this list).
  • NExT-GPT: Any-to-Any Multimodal LLM. NExT-GPT is an any-to-any multimodal language model that can process and generate content in various modalities such as text, images, videos, and audio. It achieves this by utilizing already-trained encoders and decoders, with minimal parameter tuning required.
  • Microsoft releases Prompt Flow. Microsoft has introduced Prompt Flow, a development suite for LLM-based apps. It offers a range of functionalities: creating executable workflows, debugging and iterating on flows, evaluating flow quality and performance on larger datasets, integrating testing and evaluation into CI/CD systems, and easily deploying flows to a chosen serving platform or app code base.
  • From Sparse to Dense: GPT-4 Summarization with Chain of Density Prompting. A recent study introduces “Chain of Density” (CoD) prompting, which generates dense summaries with GPT-4 by iteratively adding important entities without increasing the summary’s length. The resulting summaries are more abstractive and show less lead bias than those produced by a standard prompt (a prompting sketch follows this list).
  • Clinical Text Summarization: Adapting Large Language Models Can Outperform Human Experts. Adapted LLMs have shown promising results in clinical text summarization, surpassing human experts in completeness and correctness. This is the first work to show LLMs outperforming humans across multiple clinical summarization tasks.
  • Large-Scale Automatic Audiobook Creation. New neural text-to-speech technology and automated parsing of e-books in the Project Gutenberg collection have resulted in the creation of over 5,000 open-license audiobooks, expanding the accessibility of this vast literature collection.
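
To make the mixture-of-experts idea in the IBM item concrete, the toy PyTorch layer below routes each token to its top-k experts, so only a fraction of the parameters is active per token. It is a didactic sketch with arbitrary sizes, not IBM's ModuleFormer implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    """Toy MoE feed-forward layer: a router picks k of n experts per token."""

    def __init__(self, d_model=64, d_hidden=256, n_experts=8, k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )
        self.router = nn.Linear(d_model, n_experts)
        self.k = k

    def forward(self, x):  # x: (n_tokens, d_model)
        gates = F.softmax(self.router(x), dim=-1)   # routing probabilities
        weights, idx = gates.topk(self.k, dim=-1)   # top-k experts per token
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e            # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

print(TopKMoELayer()(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```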
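For the data-pruning paper, the sketch below scores candidate pretraining documents with a small reference model's perplexity and keeps a fraction of them. GPT-2 as the scorer and the keep-lowest-70% policy are illustrative assumptions; which slice of the perplexity distribution to keep is exactly the kind of choice the paper investigates.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small reference model used only for scoring (illustrative choice).
tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt", truncation=True, max_length=512).input_ids
    loss = lm(ids, labels=ids).loss  # mean per-token cross-entropy
    return math.exp(loss.item())

docs = [
    "The cat sat on the mat.",
    "asdf qwerty zxcv uiop",
    "Paris is the capital of France.",
]
scored = sorted(docs, key=perplexity)     # lowest perplexity first
kept = scored[: int(0.7 * len(scored))]   # keep an illustrative 70%
print(kept)
```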
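For the Chain of Density item, here is a paraphrased sketch of the iterative loop using the pre-1.0 `openai` Python client (the API current when this was published). The instructions are a paraphrase, not the paper's verbatim prompt, and `OPENAI_API_KEY` is assumed to be set.

```python
import openai  # assumes openai<1.0

def chain_of_density(article: str, rounds: int = 4) -> list[str]:
    """Start with a sparse summary, then repeatedly fold in missing
    entities without letting the summary grow."""
    messages = [{
        "role": "user",
        "content": f"Article:\n{article}\n\nWrite an initial, entity-sparse ~80-word summary.",
    }]
    summaries = []
    for _ in range(rounds + 1):  # initial summary + densification rounds
        reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
        summary = reply.choices[0].message.content
        summaries.append(summary)
        messages += [
            {"role": "assistant", "content": summary},
            {"role": "user", "content": (
                "Identify 1-3 informative entities from the article missing from "
                "your summary, then rewrite it to include them WITHOUT making it longer."
            )},
        ]
    return summaries
```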

Thank you for reading! If you want to learn more about NLP, remember to follow NLPlanet. You can find us on LinkedIn, Twitter, Medium, and our Discord server!
