
Applying computational journalism standards for higher-quality information

At AppliedXL, we combine cutting-edge data science with expert networks to provide objective, fast and reliable information that helps leaders across industries make critical decisions. We insist on the highest standards of computational journalism to ensure the integrity and ethical behavior of the smart systems we develop.

The complexity of algorithmic calculations means that it can be very challenging to ascertain exactly how a specific result was reached. That opacity matters most when such results inform decisions that impact the health of people, places and planet.

As we scale the production of information through algorithms calibrated by human experts, we at AppliedXL are committed to using these technologies responsibly and maintaining high algorithmic standards to ensure higher-quality information.

At AppliedXL, we believe in:

Transparency: Algorithms must be built around transparency so that people at all points of their production and consumption can play a role in keeping them in check. A transparent mode of operation includes:

Disclosing the data sources used to train an algorithm. This helps prevent faulty or illegally obtained data from being used unknowingly, and builds algorithms that users can better trust.

Creating transparent goals. Transparent company goals are key to keeping algorithms aligned with our mission, both inside and outside the organization. They also empower employees to speak up if something isn’t right or if a technology is violating our stated mission.

Providing transparent explanations wherever possible. Even in situations where we would not want to fully disclose how some of our technology works, we will aim to explain how it operates at a higher level. Algorithms should not be a black box. Even if our code is not made available, it should be clear how we are approaching and thinking about a problem.

Disclosing test results. We will make an effort to disclose the test results of any algorithmic technologies we use, such as their accuracy and precision rates. That way, when you use one of our algorithms or read one of our reports, you will know ahead of time how accurate we expect the results to be. A simple illustration of such metrics follows this list.

Third-party audits. Allowing third parties, such as university researchers or non-profit organizations, to experiment with or audit algorithms ensures an impartial, outside check on an algorithm’s development.
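
To make the commitment to disclosing test results concrete, here is a minimal sketch of how accuracy and precision rates can be computed from a labeled evaluation set. It is illustrative only; the function names and example data are assumptions, not a description of our actual evaluation pipeline.

```python
# Illustrative sketch only: computes the accuracy and precision rates
# mentioned above from a labeled evaluation set. Function and variable
# names are hypothetical, not AppliedXL's actual evaluation tooling.

def evaluation_report(y_true, y_pred, positive_label=1):
    """Summarize a binary classifier's accuracy and precision."""
    total = len(y_true)
    correct = sum(1 for truth, pred in zip(y_true, y_pred) if truth == pred)

    # Precision: of everything the algorithm flagged as positive,
    # how much did human experts confirm?
    flagged = [truth for truth, pred in zip(y_true, y_pred) if pred == positive_label]
    confirmed = sum(1 for truth in flagged if truth == positive_label)

    return {
        "n_examples": total,
        "accuracy": correct / total if total else 0.0,
        "precision": confirmed / len(flagged) if flagged else 0.0,
    }


if __name__ == "__main__":
    expert_labels = [1, 0, 1, 1, 0, 1, 0, 0]   # verified by human experts
    model_labels  = [1, 0, 1, 0, 0, 1, 1, 0]   # produced by the algorithm
    print(evaluation_report(expert_labels, model_labels))
    # {'n_examples': 8, 'accuracy': 0.75, 'precision': 0.75}
```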

Privacy: We are dedicated to building a company culture that prioritizes data privacy:

Accountability. In any situation where we collect data from people, we will be transparent about what data we are collecting and how that data will be used.

Due Diligence. In any situation where we use data from a third party, we will conduct due diligence on where that data came from to ensure that it was collected fairly and legally.

Data Breaches. We will take steps to prevent data breaches and ensure data security. These steps will include regular data security audits, the results of which will be made available publicly.

Right to request your own data. We will work to provide options for users to access data about themselves that we have on hand. This may also include allowing users to file a request to see which algorithms their data helped to train or which research reports their data fed into, as sketched below.
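
One way to make such requests answerable is to record data lineage whenever data is used. The sketch below is a simplified illustration of that bookkeeping, with assumed class and method names; it is not a description of our internal systems.

```python
# Hypothetical sketch of data-lineage bookkeeping that would let us answer
# "which algorithms or reports did my data feed into?" Class and method
# names are illustrative assumptions, not AppliedXL's internal systems.
from collections import defaultdict


class DataLineageRegistry:
    """Tracks which artifacts (models, reports) each person's data touched."""

    def __init__(self):
        self._usage = defaultdict(set)

    def record_usage(self, subject_id, artifact):
        """Record that data from `subject_id` was used to build `artifact`."""
        self._usage[subject_id].add(artifact)

    def usage_for(self, subject_id):
        """Answer a data-subject request: which artifacts used their data?"""
        return sorted(self._usage.get(subject_id, set()))


if __name__ == "__main__":
    registry = DataLineageRegistry()
    registry.record_usage("user-123", "ranking-model-v2")
    registry.record_usage("user-123", "quarterly-research-report")
    print(registry.usage_for("user-123"))
    # ['quarterly-research-report', 'ranking-model-v2']
```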

Responsibility: Algorithms must be built in a socially responsible way, with an awareness of the systemic discrimination that exists within societies. Rather than assuming neutrality, we will build algorithms that explicitly counter bias, so that they do not magnify pre-existing biases.

Fair data sources. We will evaluate the data sources we use for fairness, diversity and equality. Many existing data sources contain racial and gender bias because they reflect a society in which these biases exist. When using data, we will remain aware of this fact and work to compile datasets that are built on principles of fairness or that counteract existing biases.

Bias evaluation. We will work to explicitly evaluate our algorithms for biases. We are aware that technologies can inadvertently introduce biases in ways their builders did not intend. Therefore, we will maintain a culture of bias auditing to ensure fairness in our results. To this end, we also intend to build algorithms that are easy to understand and easy to evaluate: our algorithms will be transparent, and our technology will have built-in checkpoints for bias. A sketch of one such checkpoint follows this list.

Understanding social context. Technology does not exist in a vacuum but within the context of a highly complex and diverse society. When approaching projects, we seek to bring not only technical knowledge but a nuanced understanding of the setting, history and context in which a piece of technology will be deployed. This foresight helps us build algorithms that do not lead to discriminatory policies or worsen pre-existing problems. Ultimately, we build algorithms that exist in the world, not in a black box.

Feedback and accountability. We aim to be responsive and accountable to our users. To this end, we will provide mechanisms through which people affected by our technologies can share feedback and criticism with us.
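
As an illustration of what a built-in bias checkpoint could look like, the sketch below compares an algorithm's error rates across groups and fails when the gap exceeds a chosen threshold. The threshold, names and data are assumptions made for illustration, not our actual audit tooling.

```python
# Hypothetical bias checkpoint: compares an algorithm's error rate across
# groups and flags disparities above a chosen threshold. The threshold and
# names are illustrative assumptions, not AppliedXL's audit tooling.
from collections import defaultdict


def error_rates_by_group(y_true, y_pred, groups):
    """Error rate of the predictions within each group."""
    errors, counts = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        counts[group] += 1
        if truth != pred:
            errors[group] += 1
    return {group: errors[group] / counts[group] for group in counts}


def bias_checkpoint(y_true, y_pred, groups, max_gap=0.1):
    """Pass only if the error-rate gap between any two groups is small."""
    rates = error_rates_by_group(y_true, y_pred, groups)
    gap = max(rates.values()) - min(rates.values())
    return gap <= max_gap, rates


if __name__ == "__main__":
    truth  = [1, 0, 1, 0, 1, 0, 1, 0]
    preds  = [1, 0, 1, 0, 0, 1, 0, 1]
    groups = ["group_a"] * 4 + ["group_b"] * 4
    passed, rates = bias_checkpoint(truth, preds, groups)
    print(passed, rates)
    # False {'group_a': 0.0, 'group_b': 1.0}
```

In practice, a checkpoint like this would typically run automatically before an algorithm or report is released, so that disparities are caught before they reach users.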
