Developers will use 15 years of weather data from Australia to develop a model for predicting wildfires in a competition held during the event.
Developers and business leaders can learn about the latest trends in artificial intelligence (AI) at IBM’s free Data & AI digital conference on Nov. 10 starting at 2 pm GMT. The sessions will focus on operations, ethics, and cloud computing. IBM is running the conference again on Nov. 24 for India and the Asia Pacific region.
The conference has four tracks:
- Track 1: AI in production
- Track 2: Five hands-on labs
- Track 3: Data and AI essentials course
- Track 4: Data competitions and open source
People who register for the conference get $300 in credits to spend on any class in the IBM Cloud Catalog on Coursera. Participants can also attend several workshops during the conference to earn a badge or work toward a certification.
Most of the sessions will be pre-recorded and available as soon as the conference opens. Speakers will be available in the conference's Twitch stream and Slack channel. IBM developer advocates Spencer Krum, JJ Ashgar, and Matt Hamilton will host a watch party during the conference. Speakers and other developers will stop by to answer questions and do some live coding.
Key sessions for business leaders and developers
Todd Moore, IBM’s vice president of open source, recommended the panel session featuring Ibrahim Haddad, a vice president at the Linux Foundation and executive director of the LF AI & Data Foundation: “What’s next in Open Source Data Science and AI?” This session in the Data Competitions & Open Source track will cover the impact of open source and contests on the evolution of software, as well as how patents influence the ecosystem.
Peter Wang, the CEO of Anaconda, will speak at the conference and explain how his company has incorporated IBM’s trusted AI tools into workflows. Wang’s session is “Open data science meets the enterprise: Challenges and Opportunities” and is in the AI in Production track.
Moore said this session will be as relevant for business leaders as for technical executives.
“He will be telling the story of how to integrate in ethical AI along with the things that you are already doing,” Moore said.
Also during the event, a team of coders and data scientists will work on models to forecast potential wildfires in Australia in advance of the 2021 fire season. The project will be informed by data from the IBM Weather Operations Center Geospatial Analytics Center from 2005 to the present.
Investing in AI operations
Moore said that the ongoing operational side of machine learning (ML) projects is just as important as building the initial algorithm. This includes monitoring the output of the algorithm, collecting fresh training data, and refining the algorithm.
“The crux of the matter is that we all need to invest heavily in machine learning operations and the automation lifecycle that goes with it,” he said.
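The loop Moore describes, monitoring an algorithm's output, collecting fresh training data, and refining the model, can be sketched as an automated drift check that flags when retraining is due. This is a minimal illustration using only the Python standard library; the z-score-style drift signal and the threshold value are illustrative assumptions, not an IBM tool or a prescribed method.

```python
import statistics

def drift_score(baseline_preds, recent_preds):
    """Simple drift signal: shift in the mean of recent predictions,
    measured in units of the baseline's standard deviation."""
    base_mean = statistics.mean(baseline_preds)
    base_std = statistics.stdev(baseline_preds)
    return abs(statistics.mean(recent_preds) - base_mean) / base_std

def monitor_and_retrain(baseline_preds, recent_preds, threshold=2.0):
    """Flag the model for retraining when recent predictions drift
    too far from the baseline distribution (threshold is illustrative)."""
    score = drift_score(baseline_preds, recent_preds)
    return {"drift": score, "retrain": score > threshold}

# Predictions that have shifted upward trip the retraining flag;
# predictions that match the baseline do not.
shifted = monitor_and_retrain([0.4, 0.5, 0.6, 0.5, 0.4], [0.9, 0.8, 0.85])
stable = monitor_and_retrain([0.4, 0.5, 0.6, 0.5, 0.4], [0.5, 0.45, 0.5])
```

A production pipeline would replace the threshold test with whatever acceptance criteria the team has set, and wire the `retrain` flag into its automation lifecycle.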
Moore described another trend in the open source AI world: Cloud platforms and machine learning are becoming more and more interdependent. He said containers specifically are key to operational success with ML models.
“Data scientists can get the latest and greatest data for the models, and the operations person has an easy way to deploy these models,” he said. “That’s creating the portability also.”
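The portability Moore describes depends on packaging a trained model as a self-contained artifact that any container can load and serve. Below is a minimal sketch of that pattern using only the standard library; the `ThresholdModel` class is a hypothetical stand-in for a real trained model, not part of any IBM product.

```python
import json
import pickle

class ThresholdModel:
    """Hypothetical toy model; any picklable trained model works the same way."""
    def __init__(self, threshold):
        self.threshold = threshold
    def predict(self, x):
        return 1 if x >= self.threshold else 0

def package_model(model, metadata):
    """Bundle serialized model bytes with metadata: the artifact the
    data-science side hands off to the container that deploys it."""
    return {"metadata": json.dumps(metadata), "model": pickle.dumps(model)}

def load_and_predict(artifact, x):
    """What the operations side does: unpack the artifact and serve predictions,
    without needing to know how the model was trained."""
    model = pickle.loads(artifact["model"])
    return model.predict(x)

artifact = package_model(ThresholdModel(0.5), {"version": "1.0"})
```

The split between `package_model` and `load_and_predict` is the point: the serving container only depends on the artifact format, so data scientists can swap in retrained models without changing the deployment.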
Another timely conference session is one that covers the influence of code on data and vice versa. Ruchir Puri, chief scientist at IBM Research, will discuss this topic in the Track 1 session “Software is eating the World – can AI eat software?”
“The question is how will coding change to accommodate the data sets and vice versa,” Moore said. “Which one is driving the ship?”
Setting standards for ethical AI
Moore’s specialty is trustworthy AI. IBM’s AI Ethics Global leader Francesca Rossi will discuss this topic at the conference. Moore said that companies should follow a set of shared principles for building trustworthy AI models.
Moore said IBM is setting standards and working with industry partners to make this approach the default for building AI models. To support this goal, IBM donated three code bases to the Linux Foundation that companies can use to remove bias from algorithms and boost security:
- AI Fairness 360: This extensible open source tool kit helps developers understand bias in ML models.
- AI Explainability 360: This tool kit makes it easier for developers to understand how models arrive at the decisions they make.
- Adversarial Robustness Toolbox: ART gives developers the ability to evaluate, defend, certify, and verify their ML models against security threats including evasion, poisoning, extraction, and inference.
Moore believes that the industry needs to police itself and not rely on regulators to set the guardrails.
“If researchers are going to help the world with these models, they need to embrace this framework,” Moore said.