Transcript: What Are Algorithms Good For? | Jul 29, 2021

An animated slate reads "The Agenda in the Summer."

A female announcer says THE AGENDA IN THE SUMMER WITH NAM KIWANUKA IS MADE POSSIBLE THROUGH GENEROUS PHILANTHROPIC CONTRIBUTIONS FROM VIEWERS LIKE YOU. THANK YOU FOR SUPPORTING TVO'S JOURNALISM.

Nam stands in the studio. She's in her early forties, with shoulder length straight brown hair. She's wearing glasses, a blue blazer over a black shirt, and a golden pendant necklace.

A wall screen behind her reads "The Agenda in the Summer."

Nam says MORE THAN EVER BEFORE, ALGORITHMS SHAPE OUR DIGITAL INTERACTIONS, FROM THE GOOGLE RESULTS WE SEE, TO WHAT SHOWS UP IN OUR SOCIAL MEDIA FEEDS, AND WAY BEYOND. I'M NAM KIWANUKA. THAT'S ALL MOSTLY POWERED BY ARTIFICIAL INTELLIGENCE AND WHAT'S CALLED DEEP LEARNING. AS WE REFLECT ON PAST CONVERSATIONS HELPING US TO UNDERSTAND THE DIGITAL REVOLUTION WE'RE LIVING AMIDST, TONIGHT WE'LL LOOK BACK AT THE POWER OF THINKING COMPUTERS.

Music plays as an animated slate reads "The Agenda in the Summer."

Nam says IN 2014, THE LATE STEPHEN HAWKING SAID, "SUCCESS IN CREATING ARTIFICIAL INTELLIGENCE WOULD BE THE BIGGEST EVENT IN HUMAN HISTORY. UNFORTUNATELY, IT MIGHT ALSO BE THE LAST." IN 2018, ELON MUSK SAID, "I THINK THE DANGER OF AI IS MUCH GREATER THAN THE DANGER OF NUCLEAR WARHEADS." AND IN THAT SAME YEAR, AN ENVIRONICS POLL SHOWED THAT 77 percent OF CANADIANS WERE CONCERNED THAT AI IS ADVANCING TOO QUICKLY TO PROPERLY UNDERSTAND ITS POTENTIAL RISKS. BACK IN 2015, WE HEARD HOW WIDESPREAD ALGORITHMS POWERED BY AI ALREADY WERE. HERE IS OUR FIRST CONVERSATION TONIGHT. THEN, GEOFFREY HINTON, WHO HAS BEEN CALLED THE GODFATHER OF DEEP LEARNING, EXPLAINED WHERE AI WAS IN 2016 AND WHERE IT WAS HEADED.

An animated slate reads "The Agenda in the Summer."

A clip plays.

In the clip, a guest speaks in the studio.

A caption reads "Invasion of the algorithms. Making decisions."

Then, it changes to "Karen Levy. New York University."

Karen is in her thirties. She has short brown hair, and she wears a burgundy sweater over a blue t-shirt with black birds and flowers printed on it.

She says IN MY OPINION, ALGORITHMS AND BIG DATA ARE SUCH WIDESPREAD PHENOMENA THAT, IN SOME CASES, THEY HELP US MAKE DECISIONS VERY EFFECTIVELY. IN PROBABLY MORE EFFECTIVE WAYS THAN WE COULD BEFORE, RIGHT? AND THEN, IN OTHER WAYS, BECAUSE THEY'VE BECOME SO PERVASIVE AND COMPLEX AND OPAQUE, IT'S HARD TO KNOW WHAT THEY'RE DOING AND IT'S HARD TO MAKE THEM ACCOUNTABLE OR KNOW IF THEY'RE RESULTING IN FAIR OUTCOMES.

The caption changes to "Originally aired March 19, 2015."

Karen continues THEY CAN BE PRETTY DIFFICULT TO REGULATE AND BECAUSE OF THEIR PERVASIVENESS, YOU KNOW, THEIR APPLICATION TO INSURANCE CONTEXT, TO CREDIT DECISIONS, TO EMPLOYMENT CONTEXT, YOU KNOW, I THINK WE HAVE SOME VERY BIG CHALLENGES AHEAD WHEN WE TRY AND DECIDE HOW WE WANT TO REGULATE THEM.

Steve sits in the studio. He's slim, clean-shaven, in his forties, with short curly brown hair. He's wearing a blue suit, light blue shirt, and blue tie.

Steve says WELL, LET ME JUST FOLLOW ON THAT BECAUSE YOU SAID, "TO MAKE THEM ACCOUNTABLE." WHO IS THE "THEM" THAT WE ARE TRYING TO MAKE ACCOUNTABLE HERE SO THEY DON'T TAKE OVER OUR LIVES IN A BAD WAY?

Karen says YEAH, THAT'S THE TOUGH QUESTION, RIGHT? I MEAN, TO SOME EXTENT ALGORITHMS JUST IMPLEMENT POLICY, SO IN SOME WAYS, WE COULD SAY THEY'RE JUST A TOOL FOR, YOU KNOW, MAKING A DECISION. WE'VE ALWAYS USED TOOLS TO MAKE DECISIONS, BUT BECAUSE THEY CAN, YOU KNOW, LITERALLY USE MILLIONS OF DATA POINTS, SOMETIMES BILLIONS OF DATA POINTS TO MAKE DECISIONS, IT'S VERY HARD TO KNOW SORT OF WHERE ACCOUNTABILITY SHOULD LIE. IS IT WITH THE DESIGNER OF THE ALGORITHM? OR WITH, YOU KNOW, THE INSTITUTION OR POLICY BODY THAT'S DECIDED TO USE IT? IT'S VERY HARD TO SORT OF ATTRIBUTE, UM... I DON'T WANT TO USE THE WORD BLAME, BUT TO ATTRIBUTE RESPONSIBILITY FOR A RESPONSIBLE USE OF ALGORITHMS.

Steve says JONATHAN, CAN YOU PICK UP ON THAT? WHO? WHO ARE WE TRYING TO HOLD ACCOUNTABLE TO MAKE SURE THESE THINGS ARE NOT A NEFARIOUS INFLUENCE IN OUR LIVES?

The caption changes to "Jonathan Obar. University of Ontario Institute of Technology."

Jonathan is in his forties, clean-shaven, with brown and gray hair, and thin-rimmed glasses. He wears a black suit, a white shirt, and a dark blue tie.

He says I SHOULD MAKE CLEAR THAT I AM VERY CONCERNED ABOUT THE RISE OF BIG DATA AND HOW BIG DATA PRODUCTS ARE BECOMING MORE MAINSTREAM. I WAS AT A UNIVERSITY IN TORONTO... I WON'T GIVE ALL OF MY PERSONAL INFORMATION AWAY ON TELEVISION... GIVING A TALK THE OTHER DAY ABOUT HOW COMMON IT IS NOW FOR UNIVERSITIES TO OUTSOURCE THEIR EMAIL TO MICROSOFT. THIS PARTICULAR UNIVERSITY DECIDED THAT THEY WERE GOING TO OUTSOURCE THE STUDENT EMAIL TO MICROSOFT, AND OF COURSE, THE STUDENTS, WHEN WE TALKED ABOUT THIS, WERE CONCERNED, BUT THEY WEREN'T REALLY SURE WHY TO BE CONCERNED. I MENTIONED A STUDY THAT CAME OUT LAST YEAR FROM CARNEGIE MELLON UNIVERSITY THAT DESCRIBES HOW EMPLOYERS ARE INCREASINGLY USING BIG DATA PRODUCTS TO CIRCUMVENT THE LAW WHEN CONDUCTING INTERVIEWS.

Steve says HOW SO?

Jonathan says SO, FOR EXAMPLE, IF YOU WANT TO FIND OUT SOMEONE'S SEXUAL ORIENTATION, SOMEONE'S RELIGIOUS BACKGROUND, SOMEONE'S POLITICAL LEANINGS. THINGS THAT TYPICALLY YOU DON'T ASK IN AN INTERVIEW.

Steve says DON'T ASK? REALLY AREN'T ALLOWED TO ASK ACTUALLY.

Jonathan says RIGHT. BY LAW, IN SOME PLACES... IN MOST PLACES, I WOULD IMAGINE... YOU'RE NOT ALLOWED TO ASK. BUT IF YOU'RE AN EMPLOYER WHO WANTS TO KNOW THOSE THINGS, THIS STUDY AT CARNEGIE MELLON DEMONSTRATES THAT THAT IS, YOU KNOW, INCREASINGLY SOMETHING THAT EMPLOYERS AND ANYBODY INTERESTED IN THE BIG DATA INDUSTRY POTENTIALLY COULD DO. SO, THEN I ASKED THE STUDENTS, DO YOU SPEAK ABOUT YOUR SEXUAL ORIENTATION IN YOUR PERSONAL PRIVATE EMAIL? AND DOES IT CONCERN YOU THAT, BECAUSE UNIVERSITIES ARE OUTSOURCING THAT EMAIL TO MICROSOFT, AN AMERICAN COMPANY, PERHAPS FIVE, 10 YEARS DOWN THE ROAD, SOMETHING YOU MENTIONED IN "PRIVATE" IN YOUR EMAIL, LIKE SEXUAL ORIENTATION, IS EVENTUALLY USED IN A BIG DATA PRODUCT THAT AN EMPLOYER IS USING? COULD THAT AFFECT YOU? IT'S SOMETHING THAT PEOPLE WOULDN'T EVEN THINK ABOUT. A CONNECTION THAT PEOPLE WOULDN'T EVEN MAKE, BECAUSE THEY THINK THEIR EMAIL IS PRIVATE, SO THAT'S A BIG CONCERN.

Steve says SO, KAREN, JUST SO I UNDERSTAND THIS. AS LONG AS I NEVER SEND AN EMAIL, DON'T GO ON FACEBOOK, NEVER SEND A TWEET, STAY OFF INSTAGRAM, PAY CASH FOR EVERYTHING, NEVER HAVE A CREDIT CARD, I'M SAFE. IS THAT WHAT YOU'RE SAYING?

The caption changes to "Karen Levy. Data and Society Research Institute."

Karen says WELL, NO. SO, ACTUALLY, THAT'S A GREAT POINT. MY COLLEAGUE JANET VERTESI DID AN EXPERIMENT A COUPLE OF YEARS AGO OR LAST YEAR WHERE SHE WAS PREGNANT AND SHE WANTED TO SEE HOW LONG IT TOOK... SHE WANTED TO KEEP IT FROM THE INTERNET, RIGHT? SHE WANTED TO SEE HOW LONG IT TOOK BEFORE DATA BROKERS FIGURED OUT THAT SHE WAS PREGNANT. SO, THERE WAS A REALLY KIND OF WELL-PUBLICIZED EXAMPLE A FEW YEARS AGO WHERE TARGET STORES STARTED DOING A LOT OF DIRECT MARKETING TO WOMEN THAT IT ASSUMED WERE PREGNANT BASED ON THINGS THAT THEY WOULD BUY, LIKE UNSCENTED LOTION, RIGHT? SO, WE KNEW THAT. AND PREGNANCY IS A REALLY SALIENT THING FOR MARKETERS TO KNOW ABOUT, BECAUSE PREGNANT WOMEN BUY A LOT OF THINGS. SO, MY COLLEAGUE WANTED TO SORT OF SEE, HOW DO I AVOID BEING DETECTED? AND THE ANSWER WAS, IT WAS BASICALLY IMPOSSIBLE, BECAUSE EVEN WHEN SHE DID ALL THE THINGS YOU TALKED ABOUT... SHE USED TOR, LIKE A PROTECTED NETWORK. SHE USED CASH TO PAY FOR THINGS. EVEN THOSE ACTIVITIES OF TRYING TO OPT OUT GENERATED SUSPICION. SO, YOU KNOW, SHE WOULD GO AND PAY CASH FOR PURCHASES AND SHE WOULD BE TOLD, YOU KNOW, THIS MAKES YOU LOOK SUSPICIOUS. LIKE, YOU'RE GOING TO GET FLAGGED FOR THIS. LIKE, THIS PURCHASE IS... YOU KNOW, WE FEEL LIKE YOU'RE TRYING TO OPT OUT, AND THAT ITSELF CREATES SUSPICION. AND AS HARD AS IT IS FOR US TO SORT OF OPT OUT OF DATA COLLECTION TECHNOLOGIES, IT'S THAT MUCH MORE DIFFICULT FOR PEOPLE WHO ARE MARGINALIZED SOCIO-ECONOMICALLY. SO, IF YOU LIVE IN A COMMUNITY THAT'S HEAVILY SURVEILLED, OR YOU GO TO A SCHOOL THAT'S HEAVILY POLICED, YOU KNOW, THERE IS NO CHANCE TO OPT OUT, RIGHT? JUST LIVING ENTAILS A LOT OF DATA COLLECTION ABOUT YOU.

Steve says SO, WILLIAM, IT SOUNDS LIKE WE SHOULD JUST NOT BOTHER TRYING TO RAGE AGAINST THE MACHINE, JUST... THIS IS THE WORLD IN WHICH WE LIVE. IS THAT RIGHT?

The caption changes to "William Huggins. Rotman School of Management."

William is in his forties, clean-shaven, with brown short hair. He wears a gray suit, a white shirt, and a dark gray tie.

He says TO SOME EXTENT, THAT'S TRUE. A LOT OF PEOPLE LIKE TO THINK ABOUT DATA COLLECTION AS SOMETHING THAT'S BRAND-NEW, THAT JUST STARTED HAPPENING. BUT 130 TO 140 YEARS AGO, PEOPLE WERE COUNTING TRAIN CARS TO SEE HOW MANY GOODS WERE BEING SHIPPED. YOU COULD LITERALLY POST SOMEONE OUTSIDE OF A RETAIL ORGANIZATION TO SEE HOW MANY PEOPLE WERE BUYING THINGS AND ROUGHLY HOW MUCH THEY WERE BUYING. IT'S NOT THAT WE COULDN'T COLLECT THIS DATA IN THE PAST, IT'S THAT IT WAS PROHIBITIVELY EXPENSIVE TO DO SO, BECAUSE YOU HAD TO SEND HUMAN AGENTS TO ACTUALLY DO THE WORK FOR YOU. NOW THAT PEOPLE ARE CONDUCTING THEIR TRANSACTIONS ONLINE, THE COST OF TRACKING THAT DATA HAS FALLEN DRAMATICALLY, AND THAT'S WHAT'S OPENED UP ALL OF THESE TECHNOLOGIES FOR THE MOST PART. IT'S NOT THAT WE COULDN'T DO IT BEFORE. WELL, WE LACKED SOME OF THE PROCESSING POWER, BUT THE DATA WAS ALSO PROHIBITIVELY EXPENSIVE TO GET.

Steve says DO YOU IN YOUR PERSONAL DAILY LIFE TRY TO AVOID THE ALGORITHMS AS MUCH AS YOU CAN?

William says YOU CAN'T DO SO.

Steve says YOU CAN'T?

William says NOPE.

Steve says SO, DON'T BOTHER?

William says NOT GOING TO TRY.

(LAUGHING)

Steve says NOT GOING TO TRY. HERE'S FROM THE FINANCIAL TIMES. LET ME READ THIS EXCERPT. THIS IS FROM, OH, ABOUT A MONTH OR SO AGO.

A quote appears on screen, under the title "Risk of Algorithms." The quote reads "While U.S. law forbids the discrimination of borrowers based on factors such as gender or race, parsing publicly available information on social networks such as Facebook and Twitter has been shown capable of accurately predicting everything from users' political inclination to ethnicity and sexual orientation. Chief among critics' concerns is the ability to use new type of data and computerised algorithms to build 'proxies' that do not overtly discriminate on the basis of factors such as race or gender, but may use correlated information to build an in-depth profile of a particular customer." Below the quote, a caption reads "Tracy Alloway, Financial Times. February 4, 2015."

Steve says HOW CONCERNED ARE YOU ABOUT THAT?

Karen says I MEAN, IT'S OBVIOUSLY A SOURCE OF GREAT CONCERN, BUT I THINK WHAT'S IMPORTANT TO REMEMBER IS WHAT MARINA ALLUDED TO: ALGORITHMS DON'T DO ANYTHING ON THEIR OWN, RIGHT? ALGORITHMS REFLECT THE SOCIAL AND INSTITUTIONAL TRUTHS OF THE WORLD IN WHICH WE LIVE. SO, YOU KNOW, IF THERE IS RACISM, ALGORITHMS WILL PICK UP, YOU KNOW, WHERE RACISM HAS BEEN PRESENT IN SOCIETY. OR IF THERE IS BIAS, IT'LL PICK THAT UP. SO, FOR EXAMPLE, SAY AN EMPLOYER IS TRYING TO FIGURE OUT WHO TO HIRE, RIGHT? AND MEN HAVE ALWAYS BEEN MORE SUCCESSFUL IN A CERTAIN CAREER FIELD THAN WOMEN, BECAUSE OF ALL KINDS OF SOURCES OF BIAS. THEN THE ALGORITHM IS LIKELY TO JUST REFLECT THAT, RIGHT? IT WILL JUST SAY, WELL, YOU SHOULD HIRE THESE MEN BECAUSE THEY'VE BEEN MORE SUCCESSFUL, RIGHT? SO, THERE ARE LOTS OF WAYS IN WHICH ALGORITHMS CAN BOTH MASK BIAS AND ALSO REFLECT THE BIAS THAT WE LIVE WITH EVERY DAY.

Steve says DO YOU THINK PEOPLE ARE MAKING ENOUGH EFFORT, MARINA, TO CHANGE THE CODING OR THE INPUTTING INTO ALGORITHMS SO THAT THE DISCRIMINATION THAT WE'VE JUST HEARD ABOUT IS LESS PRESENT?

The caption changes to "Marina Sokolova. University of Ottawa."

Marina is in her sixties, with short white hair. She's wearing dark-rimmed glasses, a black V-neck sweater and an off-white shirt.

She says UM, I DON'T THINK SO.

Steve says NO, WE'RE NOT, HEY?

Marina says UH, WELL, MY OPINION IS THAT THIS ISSUE MIGHT NOT YET BE ADDRESSED BY THE ARTIFICIAL INTELLIGENCE COMMUNITY. AND THERE ARE, AGAIN, TWO SIDES TO THIS STORY. ONE SIDE IS THE PEOPLE WHO DESIGN INTERNET ALGORITHMS. THE OTHER SIDE IS THE PEOPLE WHO PROVIDE DATA AND WHO WANT TO USE THE RESULTS. SO, I THINK THIS POINT HAS TO BE ADDRESSED IN BOTH COMMUNITIES.

Steve says DO THESE TWO FACTIONS TALK TO EACH OTHER?

Marina says WELL... I WOULD SAY IT NEEDS MORE WORK ON TALKING TO EACH OTHER.

Steve says EVEN IF THEY KNOW THAT ALL OF THIS IS HAPPENING. IT DOESN'T SOUND LIKE THERE'S REALLY MUCH WE CAN DO ABOUT IT. IS THERE?

The caption changes to "Autonomous accountability."

William says UM, I'M NOT SURE THERE'S MUCH THAT THE INDIVIDUAL CAN REALLY DO OTHER THAN ATTEMPTING TO TRICK THE ALGORITHM, LIKE WAS ALLUDED TO, BY SORT OF MOVING TO CASH TRANSACTIONS, BUT EVEN THAT WILL FLAG YOU ANYWAY. THE REAL ISSUE IS THAT IT'S REALLY, REALLY CHEAP TO HAVE A LOT OF EYES ON A LOT OF PEOPLE RIGHT NOW, AND WITH A BILLION EYES STARING AT US, THEY CAN COLLECT ALL KINDS OF DATA, AND YOU'RE PROBABLY NOT GOING TO BE ABLE TO HIDE FROM EVERY FORM OF DATA COLLECTION NO MATTER HOW VIGILANT YOU WANT TO BE, BARRING MOVING TO A SHACK IN THE YUKON.

The caption changes to "University of Ontario Institute of Technology." Then, it changes again to "Controlling the code."

Jonathan says I OFTEN ASK MY STUDENTS, "IS PRIVACY DEAD?" AND THE STUDENTS ARE LIKE...

Steve says WHAT'S PRIVACY?

Jonathan says MAYBE, RIGHT? UM, AND THEN I SAY, IT'S UP TO YOU. PRIVACY IS A CIVIL LIBERTY AND IF WE WANT IT IN THIS COUNTRY, WE HAVE TO FIGHT FOR IT. SURE, TOP-DOWN INITIATIVES ARE A GOOD IDEA AS I'VE ALREADY MENTIONED, BUT I THINK BOTTOM-UP ONES ARE A GOOD IDEA TOO, SO THIS REQUIRES PEOPLE TO GET INVOLVED, TO PUSH THEIR ISPS TO BE MORE TRANSPARENT ABOUT WHERE THEY'RE ROUTING DATA, ABOUT WHO THEY'RE SHARING IT WITH. TO BE LIKE MAX SCHREMS FROM GERMANY, WHO REQUESTED HIS 1200 PAGES OF DATA FROM FACEBOOK TO GET A SENSE OF WHAT THEY'RE KEEPING AND WHAT THEY'RE NOT KEEPING. THEY'RE KEEPING EVERYTHING. AND THIS SORT OF THING, AND THIS WILL PUSH THE GOVERNMENT TO CREATE STRONGER LAWS, TO CREATE ENFORCEMENT MECHANISMS TO HOLD THESE COMPANIES TO ACCOUNT. BECAUSE AT THE MOMENT, IN MY OPINION, THE GOVERNMENT IS BARELY INVOLVED.

Steve says KAREN, IN OUR LAST HALF MINUTE HERE. LET ME GIVE IT TO YOU. WHAT'S YOUR VIEW ON THIS?

Karen says I MEAN, CLEARLY, THERE'S A ROLE FOR BOTTOM-UP AND TOP-DOWN MOVEMENT HERE. THERE'S A ROLE FOR INDUSTRY, THERE'S A ROLE FOR THE CONSUMER, AND THEN THERE'S A ROLE FOR REGULATORY BODIES. I THINK, IN THE END, YOU KNOW, YOU COULD UNDERSTAND BIG DATA AS BEING SORT OF THE NEXT SITE OF CIVIL RIGHTS STRUGGLE IN THAT, YOU KNOW, ITS OUTCOMES CAN HAVE, YOU KNOW, VERY SEVERE EFFECTS FOR PEOPLE WHO HAVE BEEN SUBJECT TO DISCRIMINATION IN THE PAST AND JUST AS WITH THOSE STRUGGLES, I THINK IT NEEDS TO BE SORT OF A MULTIPRONGED, MULTIFACETED APPROACH TO SOLVING SOME OF THESE ISSUES.

The clip ends and another clip plays.

A caption on screen reads "The code that runs our lives. Originally aired March 3, 2016."

Steve says GOOGLE KNOWS WHAT YOU WANT TO SEARCH BEFORE YOU FINISH TYPING, FACEBOOK CAN TAG YOU AUTOMATICALLY IN A PHOTOGRAPH. HECK! CARS CAN DRIVE THEMSELVES NOW. THAT'S NOT JUST COMPUTERS GETTING BETTER. THAT'S ARTIFICIAL INTELLIGENCE GETTING SMARTER. GEOFFREY HINTON'S THREE DECADES OF WORK ON DEEP MACHINE LEARNING HELPED MAKE IT HAPPEN AND HE JOINS US NOW ON WHERE AI IS TODAY AND WHERE IT'S HEADED. HE'S A PROFESSOR OF COMPUTER SCIENCE AT THE UNIVERSITY OF TORONTO AND A DISTINGUISHED RESEARCHER AT GOOGLE...

Geoffrey is in his late fifties, clean-shaven, with short straight gray hair. He's wearing a black sweater and a white shirt.

Steve continues IT'S GREAT TO HAVE YOU HERE AT TVO.

Geoffrey says IT'S GREAT TO BE HERE.

Steve says WANT TO GIVE US JUST A BASIC DEFINITION OF DEEP LEARNING TO START WITH?

The caption changes to "Geoffrey Hinton. University of Toronto."

Geoffrey says SO, YOUR BRAIN HAS MORE THAN 10 BILLION NEURONS IN IT.

Steve says EVEN MINE?

Geoffrey says EVEN YOURS.

Steve says OKAY.

The caption changes to "Deep learning."

Geoffrey says AND THE WAY IT WORKS IS AT EACH MOMENT EACH NEURON HAS TO DECIDE WHETHER TO GO PING, AND IT BASES THAT DECISION ON PINGS IT GETS FROM OTHER NEURONS, AND IT WEIGHTS THOSE PINGS. SO, SOME PINGS IT TAKES A LOT OF NOTICE OF, AND THESE PINGS TELL IT EITHER YOU SHOULD GO PING OR YOU SHOULDN'T GO PING, AND IT CHANGES THOSE WEIGHTS. SO, BY CHANGING HOW MUCH IT LISTENS TO OTHER NEURONS, A NEURON CAN CHANGE HOW IT BEHAVES, AND THAT'S HOW YOU LEARN EVERYTHING. SO, THAT JUST LEAVES ONE QUESTION, WHICH IS: WHAT'S THE PRINCIPLE FOR CHANGING HOW MUCH YOU LISTEN TO OTHER NEURONS? AND THAT'S CALLED A LEARNING ALGORITHM, AND DEEP LEARNING IS A LEARNING ALGORITHM FOR CHANGING HOW MUCH ONE NEURON WILL RELY ON OTHER NEURONS TO DECIDE WHETHER TO GO PING.
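Hinton's picture of a neuron weighing incoming "pings" can be sketched in a few lines of Python. This is an editorial illustration, not code from the broadcast; the function name, weights, and threshold are all invented for the example.

```python
# A toy version of the neuron Hinton describes: it sums the weighted
# "pings" arriving from other neurons and fires ("goes ping") only if
# the total crosses a threshold. All numbers here are illustrative.

def neuron_fires(incoming_pings, weights, threshold=1.0):
    """Weight each incoming ping; fire if the weighted sum is big enough."""
    total = sum(ping * weight for ping, weight in zip(incoming_pings, weights))
    return total >= threshold

# Two strongly weighted pings push this neuron over threshold; one does not.
print(neuron_fires([1, 1, 0], [0.6, 0.7, 0.9]))  # True  (0.6 + 0.7 = 1.3)
print(neuron_fires([1, 0, 0], [0.6, 0.7, 0.9]))  # False (0.6 < 1.0)
```

In Hinton's account, learning is nothing more than adjusting the entries of `weights`: the question of how to adjust them is what the learning algorithm answers.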

Steve says DO I ASSUME THERE'S A SHALLOW LEARNING AS WELL?

Geoffrey says OH, YES. THE SHALLOW LEARNING, THAT'S WHAT THE OTHER PEOPLE DO AND THAT DOESN'T HAVE LOTS OF LAYERS OF NEURONS BETWEEN THE INPUT AND OUTPUT.

Steve says SO, WE'RE INTO DEEP LEARNING HERE. HOW DOES DEEP LEARNING MIMIC HOW HUMANS LEARN ABOUT THE WORLD?

The caption changes to "Geoffrey Hinton. Google."

Geoffrey says WELL, NOBODY REALLY KNOWS HOW IN THE REAL BRAIN YOU CHANGE THE STRENGTH OF THE CONNECTIONS THAT DETERMINE HOW MUCH ONE NEURON AFFECTS ANOTHER NEURON. BUT IN THE 1980S, PEOPLE CAME UP WITH A VERY EFFECTIVE ALGORITHM FOR DOING THAT, AND IT'S MEANT TO BE A SIMPLIFIED MODEL OF THE BRAIN. NOBODY KNOWS IF THE BRAIN ACTUALLY WORKS LIKE THIS, AND BACK IN THE '80S, PEOPLE WERE VERY SUSPICIOUS, BECAUSE THE ALGORITHM DIDN'T WORK THAT WELL. BUT AS COMPUTERS GOT FASTER AND WE GOT BIGGER DATA SETS, THIS ALGORITHM NOW WORKS REALLY WELL. IT'S USED ALL OVER THE PLACE. IT'S USED IN YOUR CELL PHONE. UM, AND SO NOW IT SEEMS LIKE A BETTER BET FOR WHAT THE BRAIN MIGHT BE UP TO.

Steve says DO YOU KNOW WHO MADE UP THIS ALGORITHM?

Geoffrey says IT WAS INVENTED FIRST IN ABOUT 1970 BY SOME OBSCURE GUY. IT WAS REINVENTED BY LOTS OF PEOPLE AND THEN IN THE '80S WHEN COMPUTERS WERE FAST ENOUGH TO IMPLEMENT IT EFFECTIVELY, UM, PEOPLE STARTED USING IT AND SHOWING WHAT IT COULD DO. BUT COMPUTERS WEREN'T FAST ENOUGH TO MAKE IT REALLY IMPRESSIVE THEN, SO MAINSTREAM AI DIDN'T BELIEVE IN THIS ALGORITHM. WHAT HAPPENED A FEW YEARS AGO WAS COMPUTERS BECAME FAST ENOUGH AND SUDDENLY THIS ALGORITHM STARTED SOLVING ALL THE PROBLEMS THAT MAINSTREAM AI COULDN'T SOLVE LIKE RECOGNIZING SPEECH FOR EXAMPLE.

Steve says WOULD, WOULD WATSON, THE COMPUTER, FROM JEOPARDY WHO BEAT EVERYBODY, WOULD THAT BE PART OF WHAT WE'RE TALKING ABOUT HERE?

Geoffrey says THERE'S LITTLE BITS OF MACHINE LEARNING IN WATSON AND SOME OF THOSE BITS MAY WELL USE THIS ALGORITHM, BUT MOSTLY, IT'S HAND PROGRAMMING. IT'S A VERY IMPRESSIVE SYSTEM, BUT IT INVOLVES A HUGE AMOUNT OF HUMAN LABOUR TO MAKE IT WORK AND THE IDEA OF THESE ARTIFICIAL NEURAL NETWORKS IS YOU'LL TRY AND LEARN EVERYTHING.

Steve says I SUSPECT EVERYBODY KNOWS WHO WATSON IS, BUT ON THE CHANCE YOU DON'T, LET'S SHOW A CLIP AND REMIND EVERYBODY. HERE'S WATSON FROM JEOPARDY WHO WAS AWFULLY GOOD. ROLL THE CLIP, PLEASE.

A clip from the TV show "Jeopardy" plays on screen. A guest in his thirties says FINAL FRONTIERS FOR 1000.

A blue slate pops up on a screen and the host reads Tickets aren't needed for this 'event', a black hole's boundary from which matter can't escape.

The guest shrugs. Then, he and the other two guests on the show appear. The guest in the middle is actually a screen showing an animated sphere representing Watson the computer.

The host says WATSON?

Watson says WHAT IS EVENT HORIZON.

The host says YES.

Watson says LITERARY CHARACTER A.P.B. FOR 200.

The host, Alex Trebek, reads Wanted for a 12-year crime spree of eating King Hrothgar's warriors; officer Beowulf has been assigned the case.

The host says WATSON?

Watson says WHO IS GRENDEL?

The host says YES.

Watson says FINAL FRONTIERS FOR 200.

The host reads It's Michelangelo's fresco on the wall of the Sistine Chapel, depicting the saved and the damned.

The host says WATSON?

Watson says WHAT IS LAST JUDGMENT.

The clip ends.

Steve says YOU KNOW, IT'S AMAZING EITHER OF THE OTHER TWO GUYS GOT ANYTHING RIGHT, BUT I DID NOTICE THEY HAD SOMETHING THERE. OKAY, AGAIN, LET'S GO THROUGH THIS. HOW DOES THE ARTIFICIAL INTELLIGENCE IN WATSON COMPARE TO DEEP LEARNING?

Geoffrey says SO, THE MAIN DIFFERENCE IS IN DEEP LEARNING, YOU'RE TRYING TO LEARN EVERYTHING WITH NOBODY PROGRAMMING IT. THE ONLY THING THAT GETS PROGRAMMED IN YOUR COMPUTER SIMULATION IS THE LEARNING ALGORITHM. EVERYTHING INSIDE THIS NEURAL NET GETS LEARNED FROM DATA, NOT PROGRAMMED IN BY HAND.
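Hinton's point, that only the learning algorithm is programmed while the weights themselves come from data, can be illustrated with a much simpler learning rule than the ones deep nets use. The perceptron update below is an editorial stand-in, not deep learning itself: the programmer writes only the update rule, and weights that implement logical AND emerge from labelled examples.

```python
# Only the learning rule is programmed; the weights are learned from data.
# A single-neuron perceptron update is used here as a simple stand-in for
# the far more elaborate learning algorithms Hinton describes.

def predict(weights, bias, inputs):
    """Fire (1) if the weighted sum of inputs crosses zero."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def train(examples, n_inputs, rate=0.1, epochs=20):
    """Nudge each weight a little toward reducing the current error."""
    weights, bias = [0.0] * n_inputs, 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - predict(weights, bias, inputs)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Nobody programs AND into the neuron; it is learned from four examples.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(data, 2)
print([predict(weights, bias, x) for x, _ in data])  # [0, 0, 0, 1]
```

The hand-written part is a dozen lines; everything the trained neuron "knows" about AND lives in `weights` and `bias`, which no one typed in.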

Steve says SO, THEY'RE THINKING?

Geoffrey says UH, YES, YOU COULD SAY THAT.

Steve says I JUST DID.

Geoffrey says YES.

Steve says IS THAT ACCURATE THOUGH? IT'S INDEPENDENT THOUGHT IN SOME RESPECTS?

Geoffrey says YOU MIGHT IRRITATE SOME PHILOSOPHERS, BUT YES, I THINK THEY ARE THINKING.

Steve says UH, THAT'S VERY HEAVY. ALL RIGHT, LET'S... WE'VE GOT AN EXAMPLE OF THIS HERE. SHALL WE TRY THIS? I'VE GOT MY TRUSTY... DEVICE HERE. OKAY. THIS IS A GOOGLE TRANSLATE PROGRAM FOR THE IPAD WHICH APPARENTLY, CAN TRANSLATE SPANISH TO ENGLISH. THAT'S WHAT IT'S PROGRAMMED FOR RIGHT NOW. SHELDON, DO YOU WANT TO GET THE CAMERA ON THIS AND WE'LL TRY THIS?

Steve picks up an iPad and points at a sheet of paper on a table that reads "Hola."

He says WE'VE GOT HERE SOMETHING THAT SAYS "HOLA," SPANISH FOR "HELLO." NOW, LET'S SEE IF THIS IS... IT'S PROGRAMMED INTO HERE. WE'RE GONNA PUT THIS ON TOP. OH, AND IT'S HAPPENING ALREADY.

As he holds the iPad above the sheet of paper, the image on the iPad changes automatically, and instead of showing the word "Hola" it shows "hello."

Steve continues LOOK AT THAT. YOU PUT THE CAMERA ABOVE "HOLA." AND IT INSTANTLY TRANSLATES OVER AND OVER AGAIN TO "HELLO." CAN YOU WALK US THROUGH HOW THE NEURAL NETWORKS ARE... FIRST OF ALL, WHAT'S A NEURAL NETWORK? BECAUSE THAT'S WHAT'S AT PLAY HERE, RIGHT?

Geoffrey says OKAY, SO A NEURAL NETWORK IS A SIMULATION OF A WHOLE BUNCH OF NEURONS AND IT'S SOMETHING THAT LEARNS BY CHANGING THE CONNECTION STRENGTHS BETWEEN NEURONS.

Steve says AND IS THAT WHAT'S HAPPENING HERE?

Geoffrey says SO, FOR RECOGNIZING THE CHARACTERS, IT USES A NEURAL NET, AND THAT NEURAL NET IS TRAINED ON LOTS AND LOTS OF CHARACTERS FROM LOTS OF DIFFERENT FONTS AND WITH LOTS OF DIFFERENT DISTORTIONS AND NOISE, AND A NEURAL NET IS CURRENTLY THE BEST SYSTEM FOR BEING ABLE TO RELIABLY RECOGNIZE CHARACTERS THAT ARE DEFORMED AND NOISY.

Steve says NOW, DID THIS PROGRAM JUST TRANSLATE THAT BECAUSE SOMEBODY MADE A CODE TO CONSIDER EVERY POSSIBLE WORD IN SPANISH TO TRANSLATE, OR IS THIS THING THINKING?

The caption changes to "Thinking like a human."

Geoffrey says OKAY, FOR THIS PARTICULAR PROGRAM, I THINK CURRENTLY IT'S NOT USING NEURAL NETS TO DO THE TRANSLATION. IT'S USING NEURAL NETS TO DO THE CHARACTER RECOGNITION. GOOGLE AND OTHER PEOPLE ALREADY HAVE NEURAL NETS DOING TRANSLATION, BUT THEY'RE NOT BEING USED ONLINE AT PRESENT. AND WHEN YOU DO GOOGLE TRANSLATE, IT'LL LOOK AT PHRASES IN ONE LANGUAGE AND TRANSLATE THEM INTO PHRASES IN THE OTHER LANGUAGE, AND IT HAS THIS HUGE TABLE. UM, BUT THERE'S A NEW WAY OF DOING MACHINE TRANSLATION THAT'S MUCH MORE INTERESTING, THAT USES NEURAL NETS, WHERE IT READS THE SENTENCE IN ONE LANGUAGE AND TURNS IT INTO A THOUGHT. THAT IS, WHEN I SAY SOMETHING, THAT EXPRESSES A THOUGHT, AND OBVIOUSLY, THE WAY TO DO TRANSLATION IS TO FIGURE OUT THE THOUGHT BEING EXPRESSED IN THE FIRST LANGUAGE AND SAY THE SAME THING IN THE SECOND LANGUAGE. AND GOOGLE NOW HAS TRANSLATION SYSTEMS THAT WORK LIKE THAT. UM, THEY'RE ABOUT COMPARABLE WITH THE EXISTING TRANSLATING SYSTEM ON A MEDIUM-SIZED TRAINING SET. THEY'RE NOT QUITE AS GOOD AS THE EXISTING SYSTEM ON A REALLY BIG DATA SET YET, BUT THEY WILL BE. AND IN A FEW YEARS' TIME, WE'LL BE DOING MACHINE TRANSLATION BY TAKING THE SENTENCE IN ONE LANGUAGE, TURNING IT INTO A BIG PATTERN OF NEURAL ACTIVITY THAT IS THE THOUGHT BEHIND THAT SENTENCE, AND THEN SAYING THAT THOUGHT IN THE OTHER LANGUAGE.
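The "huge table" approach Hinton contrasts with neural nets can be sketched as a toy phrase-table translator. The three entries below are invented for illustration; the real systems held many millions of learned phrase pairs.

```python
# A toy phrase-table translator in the spirit of the pre-neural Google
# Translate Hinton describes: look up phrases in one language and emit
# the matching phrases in the other. The tiny table here is made up.

PHRASE_TABLE = {
    "hola": "hello",
    "buenos dias": "good morning",
    "como estas": "how are you",
}

def translate(sentence):
    """Greedy longest-phrase-first lookup; unknown words pass through."""
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Try the longest phrase starting at position i first.
        for length in range(len(words) - i, 0, -1):
            phrase = " ".join(words[i:i + length])
            if phrase in PHRASE_TABLE:
                out.append(PHRASE_TABLE[phrase])
                i += length
                break
        else:
            out.append(words[i])  # no table entry: keep the word as-is
            i += 1
    return " ".join(out)

print(translate("Hola"))               # hello
print(translate("buenos dias amigo"))  # good morning amigo
```

The neural approach Hinton describes replaces the table entirely: the whole sentence is encoded into a pattern of activity (the "thought") and then decoded in the target language, rather than being matched phrase by phrase.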

Steve says CAN IT UNDERSTAND NUANCE WHEN IT SEES IT?

Geoffrey says IT UNDERSTANDS SOME NUANCE. AT PRESENT, IT CAN USE A LOT OF IMPROVEMENTS STILL. SO, THERE'S SOME THINGS WE CAN'T DO AT PRESENT LIKE IF I SAY TO YOU IN ENGLISH, "THE TROPHY WOULD NOT FIT IN THE SUITCASE, BECAUSE IT WAS TOO BIG." YOU KNOW THE "IT" REFERS TO THE TROPHY, BECAUSE IT WOULDN'T FIT. BUT IF I SAY, "THE TROPHY WOULD NOT FIT IN THE SUITCASE, BECAUSE IT WAS TOO SMALL," YOU KNOW THE "IT" REFERS TO THE SUITCASE AND THAT'S REAL-WORLD KNOWLEDGE AFFECTING HOW YOU TRANSLATE. NOW, IF YOU TRANSLATE FROM ENGLISH TO FRENCH, IN FRENCH, YOU CAN'T JUST SAY "IT." YOU HAVE TO CHOOSE THE GENDER.

Steve says YES.

Geoffrey says AND SO, WE CAN'T TRANSLATE THAT ENGLISH SENTENCE INTO A CORRECT FRENCH SENTENCE YET, BECAUSE YOU NEED REAL WORLD KNOWLEDGE TO DECIDE WHAT GENDER TO MAKE THAT IT. THAT WILL HAPPEN. I DON'T KNOW IF IT WILL HAPPEN IN A FEW YEARS OR IN TEN YEARS, BUT ONCE THAT HAPPENS, WE'LL KNOW THAT IT'S REALLY UNDERSTANDING.

Steve says AND IT CAN FIGURE OUT HOMONYMS WITHOUT ANY DIFFICULTY?

Geoffrey says STUFF LIKE THAT IS NO PROBLEM.

Steve says THAT'S EASY STUFF.

Geoffrey says IT'S THE USE OF COMPLICATED REAL WORLD KNOWLEDGE TO DISAMBIGUATE THINGS. AND IT'S BEGINNING TO BE ABLE TO DO IT, BUT IT CAN'T DO IT PROPERLY YET.

Steve says IS THERE ONE AREA IN PARTICULAR THAT YOU THINK DEEP LEARNING IS GOING TO CHANGE THE FUTURE?

The caption changes to "The future, today."

Geoffrey says UM, NO, I THINK IT'S GOING TO CHANGE THE FUTURE IN LOTS AND LOTS OF AREAS. LET ME GIVE YOU A FEW EXAMPLES.

Steve says YEAH.

Geoffrey says OVER THE LAST FEW YEARS, IT'S SORT OF BECOME THE METHOD OF CHOICE FOR RECOGNIZING SPEECH. UM, IT'S NOW BECOMING THE METHOD OF CHOICE FOR TRANSCRIBING SPEECH. THEY'RE GOING ALL THE WAY FROM THE SOUNDWAVE TO A TRANSCRIPTION OF WHAT'S SAID WITH JUST ONE NEURAL NETWORK THAT DOES EVERYTHING. IT'S GONNA BECOME THE METHOD OF CHOICE FOR MACHINE TRANSLATION. SUPPOSE YOU WANT TO DESIGN A NEW DRUG. YOU'D LIKE TO KNOW, UM... I GIVE YOU A BUNCH OF [unclear] MOLECULES AND YOU'D LIKE TO KNOW HOW WELL THEY'LL BIND TO SOME TARGET SITE. AND YOU'D LIKE TO PREDICT THAT RATHER THAN DOING THE EXPERIMENT, BECAUSE IT'S MUCH CHEAPER TO DO A PREDICTION THAN AN EXPERIMENT, AND THEN YOU ONLY EXPERIMENT ON THE ONES THAT ARE PREDICTED TO WORK WELL. AND NEURAL NETS RECENTLY BECAME THE BEST METHOD OF DOING THAT. UM, IF YOU WANT TO IDENTIFY A PEDESTRIAN IN THE ROAD, A NEURAL NET IS DEFINITELY THE BEST METHOD OF DOING THAT. SO, IT'S ALL OVER. THESE NEURAL NETS, ESPECIALLY THE ONES USING THIS DEEP LEARNING ALGORITHM, ARE GOING TO BE USED EVERYWHERE.

Steve says HOW MANY YEARS AWAY DO YOU THINK WE ARE FROM A NEURAL NETWORK BEING ABLE TO DO ANYTHING THAT A BRAIN CAN DO?

Geoffrey says I DON'T KNOW. IT'S VERY HARD TO PREDICT THE FUTURE BEYOND FIVE YEARS. I DON'T THINK IT'LL HAPPEN IN THE NEXT FIVE YEARS. BEYOND THAT, IT'S ALL A KIND OF FOG, SO I'D BE VERY CAUTIOUS ABOUT MAKING A PREDICTION.

Steve says IS THERE ANYTHING ABOUT THIS THAT MAKES YOU NERVOUS?

Geoffrey says UM... IN THE VERY LONG RUN, YES. I MEAN, OBVIOUSLY, HAVING OTHER SUPER-INTELLIGENT BEINGS WHO ARE MORE INTELLIGENT THAN US IS SOMETHING TO BE NERVOUS ABOUT. IT'S NOT GOING TO HAPPEN FOR A LONG TIME, BUT IT IS SOMETHING TO BE NERVOUS ABOUT IN THE LONG RUN.

Steve says WHAT ASPECT OF IT MAKES YOU NERVOUS?

Geoffrey says WELL, WILL THEY BE NICE TO US?

Steve says IT'S JUST LIKE THE MOVIES. YOU'RE WORRIED ABOUT THAT SCENARIO IN THE MOVIES...

Geoffrey says IN THE VERY LONG TERM, YES.

Steve says WHERE THEY TURN ON US.

Geoffrey says I THINK OVER THE NEXT FIVE OR 10 YEARS, WE DON'T HAVE TO WORRY ABOUT IT. UM, ALSO, THE MOVIES ALWAYS PORTRAY IT, UM, AS AN INDIVIDUAL INTELLIGENCE. I THINK IT MAY BE THAT... IT GOES IN A DIFFERENT DIRECTION WHERE WE SORT OF DEVELOP JOINTLY WITH THESE THINGS. SO, THE THINGS AREN'T FULLY AUTONOMOUS, THEY'RE DEVELOPED TO HELP US. THEY'RE LIKE PERSONAL ASSISTANTS AND WE'LL DEVELOP WITH THEM AND IT'LL BE MORE OF A SYMBIOSIS THAN A RIVALRY. BUT WE DON'T KNOW.

Steve says IS THAT AN EXPECTATION OR A HOPE?

Geoffrey says THAT'S A HOPE.

Steve says THAT SOUNDS LIKE MORE A HOPE THAN AN EXPECTATION. LET ME READ THIS TO YOU. THIS IS FROM A PIECE IN THE DAILY BEAST LAST DECEMBER BY G. CLAY WHITTAKER TALKING ABOUT THE YEAR AI TOOK THE WHEEL.

A quote appears on screen, under the title "The year that A.I. took the wheel." The quote reads "Artificial intelligence did more than look at algorithms this year, and while we've heard about super computers and quantum computing for years, this is the first time that any of that lightning-fast, thinking-out-an-answer tech started sharing the roads, the roofs, and the responsibilities with you and me. And people are split over whether that was a good thing." Quoted from G. Clay Whittaker, The Daily Beast (December 13, 2015).

Steve says I DO WANT TO PURSUE THIS, YOU KNOW, THIS NOTION OF EXPECTATION VERSUS HOPE. YOU HOPE IT'LL ALL WORK OUT WELL, BUT IN THE LONG RUN, I SENSE YOUR EXPECTATION MAY NOT BE QUITE AS BENIGN. IS THAT FAIR TO SAY?

Geoffrey says I THINK IT'S VERY, VERY HARD TO KNOW WHAT WILL HAPPEN BEYOND A FIVE-YEAR HORIZON, SO MY STATE OF MIND IS I JUST DON'T KNOW WHAT'S GOING TO HAPPEN. UM, I THINK... TRYING TO STOP THE TECHNOLOGY WILL BE VERY HARD. I MEAN, IF YOU LOOK AT AUTOMATIC TELLER MACHINES, MY GUESS IS BACK WHEN THEY WERE INTRODUCED PEOPLE COMPLAINED ABOUT THEM PUTTING BANK TELLERS OUT OF WORK. BUT I THINK NOBODY NOW WOULD SAY THEY WERE A BAD IDEA.

Steve says EVEN BANK TELLERS?

Geoffrey says EVEN BANK TELLERS. I MEAN, THEIR JOBS ARE MORE INTERESTING BECAUSE THEY DEAL WITH THE TRICKY CASES RATHER THAN PEOPLE WHO JUST WANT TO TAKE 20 dollars OUT.

Steve says RIGHT.

Geoffrey says UM, SO IT'S CLEAR THAT THAT TECHNOLOGY IS A FORCE FOR GOOD. UM, WHETHER A TECHNOLOGY IS A FORCE FOR GOOD OR FOR BAD DEPENDS A LOT ON THE POLITICAL SYSTEM OR WHAT THE POLITICAL SYSTEM DECIDES TO DO WITH IT.

Steve says THAT'S WHAT I WANTED TO FOLLOW UP ON, BECAUSE CLEARLY THINGS IN SO MANY DIFFERENT AREAS OF LIFE ARE CHANGING SO QUICKLY. FASTER THAN OUR POLITICAL SYSTEMS ARE DESIGNED TO MAKE RULES AND LAWS AROUND THEM, SO HOW DEEPLY INVOLVED DO YOU THINK POLITICS HAS TO BE OR GOVERNMENTS HAVE TO BE IN ORDER TO DEAL WITH THE CHANGES THAT ARE COMING IN THIS SECTOR?

The caption changes to "Guiding A.I."

Geoffrey says THEY'RE GOING TO HAVE TO BE INVOLVED. SO, IF YOU JUST TAKE DRIVERLESS CARS, IT'S PRETTY CLEAR TO EVERYBODY IN THE INDUSTRY, I THINK, THAT DRIVERLESS CARS WILL SAVE A WHOLE LOT OF LIVES, BUT THE POLITICIANS ARE TERRIFIED OF THE FIRST TIME A DRIVERLESS CAR RUNS SOMEBODY DOWN. SO, POLITICALLY, IF A DRIVERLESS CAR KILLS A FEW PEOPLE BUT SAVES TENS OF THOUSANDS OF PEOPLE, THAT'S A PROBLEM FOR THE POLITICIANS, BUT THEY SHOULD JUST FACE UP TO IT AND SAY, LOOK, THESE THINGS ARE GOING TO MAKE US MUCH SAFER.

Steve says IT'LL TAKE A BRAVE POLITICIAN TO SAY, "I KNOW TWO PEOPLE WERE KILLED, BUT HERE'S THE 10,000 WE SAVED." YOU CAN'T SEE THE 10,000 SAVED, YOU CAN CERTAINLY SEE THE TWO KILLED.

Geoffrey says YEAH, AND THERE'S GOING TO BE A LOT OF THAT. BUT IT'S VERY CLEAR THAT DRIVERLESS CARS ARE GOING TO BE A GOOD THING.

Steve says UH, OKAY. SO IN CONCLUSION, WHAT KIND OF IMPACT DO YOU HOPE DEEP LEARNING HAS ON OUR FUTURE?

Geoffrey says I HOPE THAT IT, FOR EXAMPLE, ALLOWS GOOGLE TO READ DOCUMENTS AND UNDERSTAND WHAT THEY SAY AND SO RETURN MUCH BETTER SEARCH RESULTS TO YOU, SO YOU CAN SEARCH BY THE CONTENT OF THE DOCUMENT RATHER THAN THE WORDS IN THE DOCUMENT. I HOPE IT'LL MAKE FOR INTELLIGENT PERSONAL ASSISTANTS WHO CAN ANSWER QUESTIONS IN A SENSIBLE WAY AND HAVE A SENSIBLE CONVERSATION AS OPPOSED TO A CONVERSATION THAT KEEPS GETTING DERAILED. IT'LL GIVE US DRIVERLESS CARS. THAT'S CLEARLY GOING TO COME FAIRLY SOON. IT'LL MAKE COMPUTERS MUCH EASIER TO USE, I THINK, BECAUSE YOU'LL BE ABLE TO JUST SAY TO YOUR COMPUTER, "HOW DO I PRINT THIS DAMN THING," AND THE COMPUTER WILL DO IT RATHER THAN YOU HAVING TO FIGURE OUT ALL THESE COMMANDS.

Steve says SO, IT SHOULD MAKE, IF YOU ARE RIGHT, IT SHOULD MAKE OUR LIVES BETTER.

Geoffrey says YES. IT SHOULD BE JUST LIKE AUTOMATIC TELLER MACHINES WHICH MAKE THAT LITTLE BIT OF LIFE BETTER, BUT IT SHOULD DO THAT FOR A LOT OF THINGS.

Steve says FINGERS CROSSED.

Geoffrey says YES.

The caption changes to "Producer: Katie O'Connor, @KA_OConnor."

Steve says GEOFFREY HINTON, IT'S GOOD OF YOU TO JOIN US AT TVO TONIGHT. THANKS SO MUCH.

Geoffrey says THANK YOU.

An animated slate reads "The Agenda in the Summer."

The clips end and Nam stands in the studio alone. She says AND THAT'S IT FOR TONIGHT'S AGENDA IN THE SUMMER. TOMORROW, A CONVERSATION WITH RENOWNED JOURNALIST TED KOPPEL AND HIS BOOK SOUNDING THE ALARM ABOUT CYBER THREATS TO CRITICAL INFRASTRUCTURE. I'M NAM KIWANUKA. THANKS FOR WATCHING TVO AND FOR JOINING US ONLINE AT TVO.ORG. AND WE'LL SEE YOU AGAIN TOMORROW.

The Announcer says The Agenda in the Summer WITH NAM KIWANUKA IS MADE POSSIBLE THROUGH GENEROUS PHILANTHROPIC CONTRIBUTIONS FROM VIEWERS LIKE YOU. THANK YOU FOR SUPPORTING TVO'S JOURNALISM.

A slate reads "Connect with us. YouTube, Twitter, Facebook, Instagram, AgendaConnect@tvo.org."

Music plays as the end credits roll.

Logos: Unifor Local 72M. Canadian Media Guild.

Copyright The Ontario Educational Communications Authority 2021.

Watch: What Are Algorithms Good For?