Weekly AI and NLP News — September 11th 2023


The 100 most influential people in AI, Falcon 180B, and an open-source Code Interpreter



😎 News From The Web

  • The 100 Most Influential People in AI 2023. TIME magazine has released its list of the 100 Most Influential People in AI for 2023. The list includes notable figures such as Dario and Daniela Amodei, Sam Altman, Demis Hassabis, Robin Li, Clément Delangue, Lila Ibrahim, Elon Musk, Geoffrey Hinton, Fei-Fei Li, Timnit Gebru, Yann LeCun, and Yoshua Bengio.
  • Spread Your Wings: Falcon 180B is here. TII has just released Falcon 180B, a language model with 180 billion parameters trained on 3.5 trillion tokens. It outperforms Llama 2 70B and GPT-3.5 on MMLU and ranks high on the Hugging Face Open LLM Leaderboard. The model is available for commercial use, though under strict terms that exclude “hosting use.”
  • Releasing Persimmon-8B. Adept.ai introduces Persimmon-8B, an open-source LLM with impressive performance for its compact size. Despite being trained on less data, it achieves results comparable to LLaMA2, and the release pairs a fast C++ inference implementation with flexible Python code.
  • Training Cluster as a service: Train your LLM at scale on our infrastructure. Hugging Face provides cost estimates for training large language models (LLMs) of different sizes and token counts. The estimates range from $65k to $14.66M, depending on parameter count and training tokens (a rough back-of-envelope sketch follows this list).
  • Open ASR Leaderboard. Hugging Face has released a speech-to-text leaderboard that ranks and assesses speech recognition models on their platform. The current top performers are NVIDIA FastConformer and OpenAI Whisper, with a focus on English speech recognition. Multilingual evaluation will be added in future updates.
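
For intuition on where estimates like these come from, here is a rough back-of-envelope sketch using the common “compute ≈ 6 × parameters × tokens” FLOPs rule of thumb. The GPU throughput and hourly price are illustrative assumptions, not Hugging Face’s actual pricing.

```python
# Rough training-cost estimate via the common 6 * N * D FLOPs rule of thumb.
# Throughput and price below are assumptions, not Hugging Face's pricing.

def estimate_training_cost(
    params: float,
    tokens: float,
    gpu_flops_per_s: float = 3e14,   # ~300 TFLOPS effective per GPU (assumed)
    gpu_cost_per_hour: float = 2.0,  # assumed hourly rate per GPU
) -> float:
    total_flops = 6 * params * tokens
    gpu_hours = total_flops / gpu_flops_per_s / 3600
    return gpu_hours * gpu_cost_per_hour

# Example: a 7B-parameter model trained on 1T tokens lands in the same
# ballpark as the low end of the quoted range.
print(f"${estimate_training_cost(7e9, 1e12):,.0f}")  # -> $77,778
```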

📚 Guides From The Web

  • Create a Self-Moderated Commentary System with LangChain and OpenAI. This guide walks through building a self-moderated comment response system with OpenAI and LangChain. It chains two models: the first drafts a response, and the second moderates and rewrites it before publication (a minimal sketch appears after this list).
  • LLMs Are Not All You Need. LLMs have limitations, such as generating false information and lacking up-to-date knowledge. Harnessing their full potential requires a well-designed ecosystem around them: prompt engineering plus techniques like quantization, retrieval-augmented generation (RAG), and conversational memory.
  • GPTQ Quantization on a Llama 2 7B Fine-Tuned Model With HuggingFace. Hugging Face has integrated GPTQ quantization into its ecosystem, enabling compression of large language models to 2, 3, or 4 bits. The method outperforms previous quantization techniques, largely maintaining accuracy while significantly reducing model size (see the sketch after this list).
  • Evaluation & Hallucination Detection for Abstractive Summaries. Abstractive summarization is hard to evaluate: relevance and consistency are difficult to measure objectively, and reference-based metrics like ROUGE and BERTScore break down outside their reference distributions. Detecting inconsistencies between a summary and its source document is crucial, and entailment-based and QA-based metrics are making progress here (a sketch of an entailment check follows this list).
  • Are self-driving cars already safer than human drivers? Early data suggests that self-driving cars, such as Waymo and Cruise driverless taxis, may be safer than human drivers. Across roughly 6 million driverless miles they were involved in 102 crashes, most of them low-speed collisions caused by other drivers.
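
As a companion to the LangChain guide above, here is a minimal sketch of the two-model pattern, assuming the classic LangChain API; the prompts and model settings are illustrative, not the guide’s exact code.

```python
# Minimal sketch of the two-model pattern: one chain drafts a reply to a
# user comment, a second chain moderates and rewrites it before publishing.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = ChatOpenAI(temperature=0)

draft_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template(
        "Write a reply to this customer comment:\n{comment}"
    ),
)
moderation_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template(
        "Rewrite the following reply so it is polite and free of offensive "
        "or confidential content. Reply:\n{reply}"
    ),
)

# The moderator only ever sees the first model's output.
pipeline = SimpleSequentialChain(chains=[draft_chain, moderation_chain])
print(pipeline.run("Your product broke after two days!"))
```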
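
And here is a hedged sketch of 4-bit GPTQ quantization through the transformers integration the GPTQ guide describes (it requires the optimum and auto-gptq packages); the model id and calibration dataset are example choices.

```python
# Quantize a Llama 2 7B checkpoint to 4 bits with the transformers GPTQ
# integration. Calibration runs on the "c4" dataset here (an example choice).
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "meta-llama/Llama-2-7b-hf"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)

gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=gptq_config,  # quantizes weights during loading
    device_map="auto",
)
model.save_pretrained("llama-2-7b-gptq-4bit")
```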
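
Finally, for the hallucination-detection item, a small sketch of one entailment-based consistency check: an off-the-shelf NLI model scores whether the source document entails each summary sentence. The checkpoint and the flagging rule are assumptions, not the article’s exact method.

```python
# Flag summary sentences that the source document does not entail,
# using a public NLI checkpoint (long sources would need chunking).
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def flag_possible_hallucinations(source: str, summary_sentences: list[str]):
    flagged = []
    for sentence in summary_sentences:
        # Premise = source document, hypothesis = summary sentence.
        result = nli([{"text": source, "text_pair": sentence}])[0]
        if result["label"] != "ENTAILMENT":
            flagged.append((sentence, result["label"], result["score"]))
    return flagged
```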

🔬 Interesting Papers and Repositories

  • KillianLucas/open-interpreter: OpenAI’s Code Interpreter in your terminal, running locally. Open Interpreter is an open-source take on OpenAI’s Code Interpreter with a ChatGPT-like natural-language interface. It runs code of various types locally through an interactive terminal chat, letting the model control your computer’s capabilities without the hosted version’s internet-access limitations (a usage sketch follows this list).
  • Large Language Models as Optimizers. LLMs can act as optimizers in settings where gradients are unavailable. Optimization by PROmpting (OPRO) has the LLM generate candidate solutions from a meta-prompt containing previously evaluated solutions and their scores; the candidates are scored and fed back into the prompt in an iterative optimization loop. OPRO has shown promising results, outperforming human-designed prompts on prompt-optimization tasks (see the loop sketch after this list).
  • SLiMe: Segment Like Me. SLiMe, a novel approach that combines vision-language models and Stable Diffusion (SD), allows image segmentation at custom granularity using just one annotated sample. It outperforms existing one-shot and few-shot image segmentation methods, as demonstrated in comprehensive experiments. 🖼️
  • FLM-101B: An Open LLM and How to Train It with $100K Budget. The authors present a growth-based strategy to train a cost-effective 101B-parameter LLM on 0.31T tokens for just $100K. They also introduce an IQ-oriented evaluation method, reporting performance on par with top models like GPT-3 and GLM-130B on IQ benchmark evaluations.
  • One Wide Feedforward is All You Need. Researchers have found that the Feed Forward Network (FFN) in Transformers can be slimmed down, yielding up to a 40% reduction in model size at similar performance. Sharing a single FFN across the encoder layers and removing the FFN from the decoder layers cuts parameters with only a minimal drop in accuracy.
  • Efficient RLHF: Reducing the Memory Usage of PPO. The authors present Hydra-PPO, a method to accelerate Reinforcement Learning from Human Feedback (RLHF) by reducing memory usage. Hydra-PPO reduces the number of models in memory during the PPO stage, allowing for increased training batch size and decreased per-sample latency by up to 65%.
  • PromptTTS 2: Describing and Generating Voices with Text Prompt. PromptTTS 2 is a text-to-speech system that can control attributes like gender, speed, pitch, and volume via text prompts. It can also match synthesized voices to facial images while preserving timbre.
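
For the Open Interpreter item above, a brief usage sketch based on the project’s README at the time of writing; check the repository for the current interface.

```python
# Install with: pip install open-interpreter
# (running the `interpreter` command starts the same chat in your terminal)
import interpreter

# Streams a ChatGPT-like conversation; generated code runs locally after
# you approve each step (the README documents an auto-run option too).
interpreter.chat("Summarize the CSV files in the current directory")
```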
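
And a minimal sketch of the OPRO loop from the optimizers paper; `llm` and `evaluate` are hypothetical stand-ins for an LLM call and a task-specific scorer, and the meta-prompt wording is an illustrative assumption.

```python
# Iteratively ask the LLM for better solutions, showing it the best
# (solution, score) pairs found so far.
def opro(llm, evaluate, n_steps: int = 20, n_candidates: int = 8):
    scored = []  # (score, solution) pairs: the optimization trajectory
    for _ in range(n_steps):
        top = sorted(scored)[-10:]  # keep the 10 best-scoring solutions
        history = "\n".join(f"text: {s}\nscore: {sc}" for sc, s in top)
        meta_prompt = (
            "Here are previous solutions and their scores:\n"
            f"{history}\n"
            "Write a new solution that achieves a higher score."
        )
        for _ in range(n_candidates):
            candidate = llm(meta_prompt)
            scored.append((evaluate(candidate), candidate))
    return max(scored)  # best (score, solution) pair
```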

Thank you for reading! If you want to learn more about NLP, remember to follow NLPlanet. You can find us on LinkedIn, Twitter, Medium, and our Discord server!
