The AI & Data Science Roundup #6 - OpenAI SearchGPT, Meta Llama 3.1, AI Alzheimer's Prediction, and more!

Welcome to this week's edition of "The AI and Data Science Roundup"! Here, we bring you a detailed look at the latest and most exciting developments in AI and data science. Let's dive into the top stories and updates that made waves this week.


AI News Highlights

First up, OpenAI has unveiled SearchGPT, an AI-powered search engine that promises to change the way we interact with search results. SearchGPT organizes and contextualizes search results rather than returning a plain list of links, aiming for a clearer overall user experience. Currently, it's in the prototype phase with access limited to 10,000 test users. OpenAI has collaborated with renowned publishers like The Wall Street Journal and Vox Media to ensure proper content attribution and transparency. The ultimate goal is to integrate SearchGPT's features directly into ChatGPT, enabling real-time web content interaction.

Next, Meta has released Llama 3.1, emphasizing its commitment to open-source AI. This new iteration includes models with 405 billion, 70 billion, and 8 billion parameters, offering a strong balance of cost and performance. Meta is partnering with industry giants like Amazon, Databricks, and NVIDIA to provide developers with robust tools for fine-tuning and distilling models. By making these models open source, Meta aims to empower organizations to tailor AI to their specific needs, ensuring data security and fostering innovation. This move underscores Meta's strategic vision of democratizing AI to enhance productivity, creativity, and overall quality of life.

In a groundbreaking development, researchers at the University of Cambridge have created an AI tool that predicts the progression of Alzheimer's disease with remarkable accuracy. Utilizing cognitive tests and MRI scan data from over 1,900 participants in the U.S., U.K., and Singapore, this tool has demonstrated 82% accuracy in identifying those who will develop Alzheimer's and 81% accuracy for those who will not. This innovation promises to significantly improve early diagnosis and patient care while reducing unnecessary diagnostic tests.

Finally, Google's new AI-integrated weather model, NeuralGCM, is setting new benchmarks in forecasting accuracy. Unlike traditional models that require supercomputers, NeuralGCM can efficiently run on a laptop, making it both powerful and accessible. It combines machine learning with physics-based models, offering superior accuracy in short-term weather forecasts and long-term climate projections. While AI models like NeuralGCM have not yet been fully adopted by public forecasting agencies, their potential is undeniable and they represent a significant step forward in meteorological science.


Trending Kaggle Competitions

The ISIC 2024 - Skin Cancer Detection with 3D-TBP competition is one of the most exciting challenges currently underway on Kaggle. This competition is focused on developing image-based algorithms to identify skin cancer from 3D total body photographs, aiming to enhance early detection and triage, especially in settings where specialized care isn't readily available.

Participants will be evaluated based on the partial area under the ROC curve (pAUC) above an 80% true positive rate (TPR). The total prize pool for this competition is $80,000, with $65,000 allocated for the top 5 teams. Additionally, there are $15,000 in secondary prizes up for grabs for those who excel in retrieval sensitivity and model efficiency. The competition kicked off on June 26, 2024, and will run until September 6, 2024. If you’re up for the challenge, head over to Kaggle and join the race to improve early skin cancer detection!
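To make the evaluation metric concrete, here is a simplified pure-Python sketch of pAUC above a TPR threshold: the area between the ROC curve and the horizontal line TPR = 0.8 (this is an illustration of the idea, not the official Kaggle scoring code; all function and variable names are our own):

```python
def partial_auc_above_tpr(labels, scores, min_tpr=0.8):
    """Area between the ROC curve and the horizontal line TPR = min_tpr.

    A perfect classifier attains 1.0 - min_tpr (here 0.2); a model that
    never reaches the required TPR region scores 0.0.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    # Sweep thresholds from highest score to lowest, collecting ROC points.
    pairs = sorted(zip(scores, labels), reverse=True)
    points = [(0.0, 0.0)]  # (FPR, TPR)
    tp = fp = 0
    for i, (score, label) in enumerate(pairs):
        if label:
            tp += 1
        else:
            fp += 1
        # Emit a point only after the last of a run of tied scores.
        if i == len(pairs) - 1 or pairs[i + 1][0] != score:
            points.append((fp / neg, tp / pos))
    # Trapezoidal area of the ROC curve clipped below at TPR = min_tpr.
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        y0c, y1c = max(y0, min_tpr), max(y1, min_tpr)
        area += (x1 - x0) * ((y0c + y1c) / 2.0 - min_tpr)
    return area


# A classifier that perfectly separates the classes attains the maximum (~0.2).
print(partial_auc_above_tpr([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))
```

Because only the region above 80% TPR counts, a model can have a decent overall AUC yet score zero here if it never reaches high sensitivity, which is exactly the clinical behavior the organizers want to reward.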


Trending Open Source Projects

This week, we're spotlighting Code Llama, a family of large language models specifically designed for coding tasks. Code Llama is built on the robust Llama 2 architecture and offers state-of-the-art performance among open models. It boasts infilling capabilities, support for large input contexts, and the ability to follow instructions zero-shot on programming tasks.

Code Llama comes in multiple flavors to cater to a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct). Each of these variants is available in 7 billion, 13 billion, and 34 billion parameter versions. These models are trained on sequences of 16,000 tokens and exhibit improvements on inputs with up to 100,000 tokens. Notably, the 7B and 13B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content, making them particularly versatile for code-related tasks.
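Infilling works by showing the model the code before and after a gap and asking it to generate the middle. A minimal sketch of how such a prefix-suffix-middle prompt is assembled, following the sentinel-token format described in the Code Llama paper (the helper name is ours, and exact token handling may differ across tokenizers and releases):

```python
def build_infill_prompt(prefix: str, suffix: str) -> str:
    """Assemble a prefix-suffix-middle (PSM) infilling prompt.

    The model generates the missing middle section after the <MID>
    sentinel, stopping at an end-of-text sentinel.
    """
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"


# Ask the model to fill in a function body between a signature and a return.
prompt = build_infill_prompt(
    prefix="def remove_non_ascii(s: str) -> str:\n    ",
    suffix="\n    return result",
)
print(prompt)
```

The completion the model returns is then spliced back between the prefix and suffix, which is what makes the 7B and 13B variants useful as in-editor autocomplete engines.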

Code Llama was developed by fine-tuning Llama 2 with a higher sampling of code, ensuring optimal performance for coding tasks. Significant safety mitigations have been applied to the fine-tuned versions of the model to promote responsible AI usage. For more detailed information on model training, architecture, parameters, evaluations, and safety measures, you can refer to their comprehensive research paper.

This release includes model weights and starting code for both pretrained and fine-tuned Llama language models, ranging from 7 billion to 34 billion parameters. Code Llama is now accessible to individuals, creators, researchers, and businesses of all sizes, enabling them to experiment, innovate, and scale their ideas responsibly. If you're interested in exploring the power of large language models for code, the Code Llama repository provides a minimal example to load the models and run inference.


Top Research

An exciting new paper titled "CLIP with Generative Latent Replay: a Strong Baseline for Incremental Learning" has made significant contributions to the field. With the rise of Transformers and Vision-Language Models (VLMs) like CLIP, leveraging large pre-trained models has become a go-to strategy for boosting performance in Continual Learning scenarios. These models have spurred various prompting strategies aimed at fine-tuning transformer-based models effectively, without falling into the trap of catastrophic forgetting. However, these methods often face challenges when it comes to specializing the model for domains that significantly deviate from the pre-training data while maintaining its zero-shot capabilities.

In this work, the authors propose Continual Generative training for Incremental prompt-Learning—a novel approach designed to mitigate forgetting while adapting a Vision-Language Model. This method leverages generative replay to align prompts with specific tasks, helping to preserve and enhance the model's zero-shot capabilities. Additionally, the researchers introduce a new metric specifically for evaluating zero-shot capabilities within Continual Learning benchmarks.

Through extensive experiments across various domains, the study demonstrates the effectiveness of this framework in adapting to new tasks while simultaneously improving zero-shot performance. Further analysis suggests that this approach can bridge the gap with joint prompt tuning, offering a robust solution for incremental learning scenarios.

For those interested in the technical details and experimental results, you can find the full paper through the provided link. This research presents a significant advancement in the field of Continual Learning and Vision-Language Models, showcasing the potential of generative replay techniques to enhance model adaptability and performance.


Startup Shoutout

This week, our spotlight is on Cohere, a Toronto-based startup that has recently raised an impressive $500 million in Series D funding. Cohere provides access to advanced Large Language Models and NLP tools through an easy-to-use API, making it a standout in the industry.

Cohere's models are specifically designed for enterprise generative AI, search, and advanced retrieval, offering powerful solutions for a variety of business needs. These models are trained on business language, ensuring they deliver accurate and efficient results tailored to enterprise applications.

One of the key features of Cohere's offering is the integration of Rerank and Embed models for reliable retrieval-augmented generation (RAG). This combination enhances the capabilities of their models, making them highly effective for tasks that require sophisticated information retrieval and generation.
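To make the embed-then-rerank pattern concrete, here is a library-agnostic sketch of the first retrieval stage in a RAG pipeline. The toy bag-of-words embedding is a stand-in for a real embedding model such as Cohere's, and all names here are illustrative, not Cohere API calls:

```python
import math
import re
from collections import Counter


def toy_embed(text: str) -> Counter:
    # Stand-in for a real embedding model: lowercase bag-of-words counts.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """First-stage retrieval: rank documents by embedding similarity.

    In a production RAG system, a second-stage reranker would reorder
    these candidates before they are passed to the generator.
    """
    q = toy_embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, toy_embed(d)), reverse=True)
    return ranked[:top_k]


docs = [
    "Quarterly revenue grew 12 percent year over year.",
    "The cafeteria menu changes every Monday.",
    "Revenue growth was driven by enterprise subscriptions.",
]
print(retrieve("What drove revenue growth?", docs))
```

The two-stage design exists because embedding similarity is fast but coarse; a reranker scores each query-document pair jointly, trading latency for precision on the short list that actually reaches the language model.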

In addition to their powerful models, Cohere provides secure and flexible deployment options, allowing enterprises to integrate AI solutions seamlessly with their existing data infrastructure. This flexibility ensures that businesses can adopt and scale AI technologies in a way that best suits their unique requirements.

Cohere's recent funding round is a testament to their innovative approach and the significant impact they're making in the enterprise AI space. We're excited to see how they continue to advance the field and drive new possibilities for businesses around the world.


Library of the Week

This week's spotlight is on MLflow, an open-source platform designed to manage the end-to-end machine learning lifecycle, encompassing experimentation, reproducibility, and deployment.

Here are some of the key features and aspects of MLflow:

Experiment Tracking: MLflow enables you to log and track experiments by capturing parameters, metrics, and artifacts. This makes it easy to reproduce and compare different runs, ensuring transparency and accountability in your experimentation process.

Model Management: With MLflow, you can register, manage, and deploy models from a central repository. This functionality supports versioning, ensuring that the models used in production are well-documented and traceable, which is crucial for maintaining robust and reliable machine learning applications.

MLflow Projects: These provide a standardized way to package and share machine learning code. A project is simply a directory or Git repository with a defined structure, including a specification file (MLproject) that describes the dependencies and entry points for the code, making collaboration and code sharing seamless.

MLflow Models: This component allows you to deploy machine learning models in diverse environments, whether it's on-premises or in the cloud. MLflow supports multiple formats like TensorFlow, PyTorch, and Scikit-learn, enabling seamless integration with various tools and frameworks.

MLflow Tracking Server: The server can be deployed to manage and store experiment metadata centrally. This is crucial for collaboration, as it allows multiple users to log and query experiments in a unified manner, facilitating teamwork and consistency across projects.

Integration with Other Tools: MLflow integrates well with popular machine learning libraries and tools, such as Apache Spark, TensorFlow, and Scikit-learn. It also supports deployment tools like Docker and Kubernetes, making it flexible and adaptable to various workflows.

MLflow provides a robust framework for managing the complex workflows associated with machine learning projects, from experimentation to production deployment. It's widely adopted in both academia and industry, reflecting its versatility and effectiveness in enhancing the productivity and reproducibility of machine learning endeavors.

That's all for this week's roundup! We hope you enjoyed our deep dive into the latest news, competitions, open source projects, top research, innovative startups, and essential tools in the AI and data science community.

Alister George Luiz

Data Scientist
Dubai, UAE