In these slides, we will focus specifically on the ability to run different parts of a program on two or more cores, which can reduce the overall execution time of the program.
If a program, for example, contains repeating chunks of code (such as the body of a loop) that can run independently of one another — that is, no iteration of the loop depends on the result of any previous iteration — we can run these chunks in parallel, each on its own CPU core.
We call such an independently running sequence of a program's instructions a thread of execution. Every program you ever get to run has $1$ or more threads of execution!
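To make the idea concrete, here is a minimal Python sketch (not from these slides; all names, such as `sum_chunk` and `partial_sums`, are ours) of splitting a loop whose iterations are independent across several threads, each of which could run on its own core. One caveat worth hedging: in the standard CPython interpreter, the global interpreter lock prevents threads from running Python bytecode truly in parallel, so this example illustrates the *structure* of threaded code rather than a guaranteed speedup.

```python
import threading

# The work: summing a list. Each addition is independent of the others,
# so the list can be split into chunks that are summed concurrently.
data = list(range(1, 1001))        # 1 + 2 + ... + 1000 = 500500
n_threads = 4
chunk = len(data) // n_threads     # 250 items per thread
partial_sums = [0] * n_threads     # one slot per thread; no slot is shared

def sum_chunk(i):
    # Thread i sums only its own slice of the data.
    start = i * chunk
    partial_sums[i] = sum(data[start:start + chunk])

# One thread of execution per chunk (plus the main thread that started them).
threads = [threading.Thread(target=sum_chunk, args=(i,)) for i in range(n_threads)]
for t in threads:
    t.start()                      # begin running the chunks concurrently
for t in threads:
    t.join()                       # wait for every thread to finish

total = sum(partial_sums)
print(total)                       # 500500
```

Note that each thread writes to its own slot of `partial_sums`, so the threads never touch the same data — this is exactly the "iterations don't depend on one another" condition described above.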
In this course, we only introduce the general idea of threads and explain how they contribute to a computer's performance. If you want to learn more about this topic, which is beyond this course's scope, you are welcome to look at the lecture notes on threads written for CISC 3320: Operating Systems, a course you will take before you graduate and that covers threads much more extensively.