In the next three years, society will generate more data than in the last 30 years combined, yet our ability to make sense of this growing volume of information isn't keeping pace. Through our previous work as computational journalists at the Associated Press and the Wall Street Journal, we've seen firsthand how this affects news gathering: sifting through data for what's relevant is time-consuming, and stories are often outdated by the time they're published. In response, we've set out to build a more efficient system that filters diverse data sources and delivers accurate reports in real time: a self-gathering news system. As information overload is set to triple in the next five years, we believe artificial intelligence, and language models in particular, can become an important tool for faster, more efficient news gathering, production, and distribution. But important issues need to be addressed first.
Although we believe in AI’s potential to enhance and streamline workflows in certain coverage areas, large language models face two major constraints when applied to journalism: they lack access to real-time information, and they lack contextual understanding of specialized subject areas.
To address these limitations, we are developing a new approach that augments open-source large language models (LLMs) with real-time information and contextual understanding about specific subject areas. Our initial focus is on an industry where traditional methods of data collection and interpretation have been woefully inadequate — biotech.
Enter AXL-1, our experimental language model that turns structured data into concise news digests, upholding editorial integrity at computational speed.
The life sciences have historically struggled to keep pace with information growth, especially in drug development. In the United States alone, there are nearly 7,000 clinical trial updates every day. Faced with this volume, biotech professionals tasked with developing new treatments spend hours manually tracking dozens of sources to stay on top of critical developments in their therapeutic areas.
Our journey began with our inaugural product, STAT Trials Pulse, a tool that uses a combination of editorial algorithms and machine learning to extract signals from clinical trial registries, scientific papers, and press releases. The platform covers 22,000 organizations, 26,000 interventions, and 5,800 diseases, surfacing crucial clinical trial events and risk signals across more than 90 categories, ranging from study delays to shifts in patient populations, unexpected interruptions, and more. Together, these generate millions of unique data points related to clinical trial events. We are now expanding those capabilities through AXL-1, which transforms these continuous flows of data into news briefs. Focusing initially on drug development data, AXL-1 will first serve the biotech industry.
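To make the structured-data-to-brief idea concrete, here is a minimal sketch of that transformation. The event schema and field names below are hypothetical illustrations, not AppliedXL's actual data model, and a deterministic template stands in for the language model:

```python
from dataclasses import dataclass

@dataclass
class TrialEvent:
    """Hypothetical structured record for a clinical trial signal."""
    sponsor: str
    intervention: str
    disease: str
    category: str  # e.g. "study delay", "patient population shift"
    detail: str

def event_to_brief(event: TrialEvent) -> str:
    """Render a structured event as a one-sentence news brief."""
    return (
        f"{event.sponsor}'s trial of {event.intervention} "
        f"for {event.disease} flagged for {event.category}: {event.detail}."
    )

# Example: a single detected event becomes a publishable one-liner.
event = TrialEvent(
    sponsor="Acme Bio",
    intervention="ABC-123",
    disease="psoriasis",
    category="study delay",
    detail="primary completion date pushed back six months",
)
print(event_to_brief(event))
```

In a real system, the template function would be replaced by a fine-tuned model conditioned on the structured record, but the input/output contract stays the same: structured event in, editorially shaped text out.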
AXL-1’s differentiation lies in its computational journalistic methodology, which integrates editorial workflows into model training — to our knowledge, the first of its kind. However, tuning a model for news generation involves more than merely replicating tone or style; it's about instilling journalistic reasoning within the language model. This distinctive method requires a continuous and iterative feedback loop between the AI and human editors, who provide model feedback on data relevancy, context selection, and accuracy.
Our "Data2News" system employs a learning approach in which it reviews machine-generated content that has been edited by journalists, along with associated metadata from 25 editorial categories developed by our computational journalism team, including "redundancy," "order of information," "tone," and "factual accuracy," among others. We have applied this methodology to two 7-billion-parameter models, fine-tuning them on a diverse range of data including structured clinical trial information, textual news summaries, and journalistic interpretations. We optimized the models by introducing and refining an additional 3 million trainable parameters of our own. A crucial aspect of our methodology is a human-in-the-loop approach, in which editors provide targeted feedback. As a result, we've been able to significantly improve the accuracy of our pre-trained models.
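The post doesn't specify which parameter-efficient technique contributes those roughly 3 million extra parameters. Low-rank adapters (in the style of LoRA) are one common way to add a small trainable delta on top of a frozen model, and the arithmetic below sketches the idea in plain Python, assuming that style of adapter:

```python
def matmul(A, B):
    """Naive matrix multiply for small dense matrices (lists of lists)."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def apply_adapter(W, A, B, scale=1.0):
    """Effective weight W' = W + scale * (A @ B), with W frozen.

    Only A (d x r) and B (r x d) are trained, so the number of new
    parameters is 2 * d * r instead of d * d.
    """
    delta = matmul(A, B)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# A 4x4 frozen weight with a rank-1 adapter: 4*1 + 1*4 = 8 trainable
# parameters instead of 16 -- the same economics that let a
# 7-billion-parameter model be adapted with only ~3M extra parameters.
d = 4
W = [[0.0] * d for _ in range(d)]  # frozen pretrained weight (toy values)
A = [[1.0] for _ in range(d)]      # d x 1, trainable
B = [[0.5] * d]                    # 1 x d, trainable
W_eff = apply_adapter(W, A, B)
print(W_eff[0])  # each entry is 0.0 + 1.0 * 0.5 = 0.5
```

The appeal of this family of methods is that the base model's weights never change; swapping editorial behaviors means swapping a few megabytes of adapter weights rather than retraining billions of parameters.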
Given the rapid pace of R&D in language models, with paradigm-shifting developments arriving almost weekly, it isn't desirable to rely on a single model. At the same time, training a model from scratch demands a substantial financial investment. The evolving nature of news requires an architecture that can quickly adapt and learn new concepts, especially in specialized fields like the life sciences. To address this, we are building a dynamic AI infrastructure that lets us swap models in and out as they become available. This allows us to regularly iterate on and optimize our human-in-the-loop systems, and to integrate the latest developments in open-source language models in our pursuit of high-precision news generation. We believe open-source models present opportunities to foster open innovation, inviting greater scrutiny and trust; over time, this can help mitigate biases and limitations and enhance model performance.
The approach we are developing enables us to swiftly integrate a new LLM and fine-tune high-performance open-source models such as Llama 2 and Falcon (which hold top ranks on Hugging Face's Open LLM Leaderboard). This optimization focuses exclusively on high-accuracy journalistic tasks, avoiding the costly computation involved in training from scratch, and gives us greater control over our data, our expenses, and the model's responses.
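A swappable-model architecture like the one described can be sketched as a small interface that every backend satisfies. The class and method names here are illustrative assumptions, not AppliedXL's actual code, and a template backend stands in for a real model wrapper:

```python
from typing import Protocol

class NewsModel(Protocol):
    """Minimal interface every swappable backend must satisfy."""
    name: str
    def generate_brief(self, event: dict) -> str: ...

class TemplateBackend:
    """Stand-in backend; a real one would wrap Llama 2, Falcon, etc."""
    name = "template-v0"

    def generate_brief(self, event: dict) -> str:
        return f"[{self.name}] {event['headline']}"

class Newsroom:
    """Pipeline that depends only on the NewsModel interface."""

    def __init__(self, model: NewsModel):
        self.model = model

    def swap_model(self, model: NewsModel) -> None:
        # Hot-swap the backend as better open-source models ship;
        # nothing downstream needs to change.
        self.model = model

    def publish(self, event: dict) -> str:
        return self.model.generate_brief(event)

room = Newsroom(TemplateBackend())
print(room.publish({"headline": "Trial ABC-123 delayed"}))
```

Because the pipeline depends only on the interface, swapping in a newly released model is a one-line change, which is what makes weekly model churn manageable rather than disruptive.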
As AI becomes integrated into areas of news production where structured data can be sourced in real time, it is important to note that it is not intended to replace human journalists. Instead, it augments their work and opens doors to new types of reporting that can transform torrents of data, like those in clinical trials, into digestible, insightful news stories. As smart machines enter the newsroom, the role of journalists is also transforming, shifting from traditional reporting to guiding AI on data relevancy, context selection, and accuracy. This shift not only paints an intriguing future for the journalism landscape, but also raises important ethical questions concerning transparency, authorship, quality control, and potential manipulation of AI algorithms.
In an era marked by the explosion of AI-generated information, honing human-exclusive critical thinking and ethical standards is more than just an advantage—it's an imperative for ensuring that technology serves us, not the other way around. We believe that journalists, instead of being mere spectators, have a foundational role in holding these systems accountable and driving them towards greater ethical conduct and unbiased performance.
Currently, AXL-1 specializes in biotech reporting. However, our goal is to expand the scope of this approach to various fields where we can apply event and pattern detection. We believe that the future of generative information will not be built on a single large language model, but rather on a network of domain-specific, fine-tuned language models, such as AXL-1. In this new reality, human journalists also become the orchestrators of specialized AI agents, seamlessly integrating real-time data into meaningful narratives, illuminating the 'data heartbeats' of our world, and helping us make informed decisions. At its core, this rethinking of the journalistic process centers on the fusion of human expertise with the capabilities of AI.
Our interest extends beyond technological advancements in the news and information industry; we are equally invested in its commercial implications. Amid the rapid expansion of the generative AI market, projected to reach nearly $120 billion over the next decade according to Precedence Research, AppliedXL presents a blueprint for news companies to leverage their own data to train and commercialize open-source models. This signals a significant step for the industry to extract value from the AI boom. We see AXL-1 as a business catalyst, including the integration of generative AI capabilities into our core data platform, which gives our customers the ability to read summaries and generate comprehensive landscape reports. It also paves the way for creating innovative news products in collaboration with other newsrooms.
While we are excited about the initial accuracy levels of our generative AI output, we aren't ready to release it publicly just yet. We're currently refining AXL-1's capabilities in collaboration with journalists and technologists, as we want to ensure it's error-free and ready before a public release. If you're a journalist or a biotech professional, we invite you to apply to our private beta.