I just finished a class on data structures that touched on algorithms. The book we used for the class explained Big-O notation as a way of describing the running time of an algorithm. So here are two examples, based on examples they used in the book:
Suppose you're looking for the smallest value in an unordered array. So you'd take the first element as the implied minimum and then compare it against every other element in the array. The running time, which I'll call T(n), would be n - 1, where n is the number of elements in the array.
For n = 100, T(100) = 100 - 1 = 99
For n = 500, T(500) = 500 - 1 = 499
For n = 1000, T(1000) = 1000 - 1 = 999
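Here's a minimal Python sketch of that find-the-minimum scan (the book didn't give code in any particular language, so Python here is just for illustration; the comparison counter is something I've added so you can see that T(n) = n - 1):

```python
def find_min(values):
    """Return the smallest value in an unordered list, plus the
    number of comparisons made along the way."""
    minimum = values[0]       # take the first element as the implied minimum
    comparisons = 0
    for value in values[1:]:  # compare it against every other element
        comparisons += 1
        if value < minimum:
            minimum = value
    return minimum, comparisons
```

For example, `find_min(list(range(100, 0, -1)))` scans 100 elements and reports exactly 99 comparisons, matching T(100) above.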
In the above example, though the running time could be described precisely as "one less than the number of array elements" comparisons, the term with the most influence on the running time is the number of elements in the array (n), so you'd say the algorithm above for finding the minimum value has O(n) running time.
As another example, let's look at a selection sort on an unordered array. Here, the minimum value is found and then swapped with the value in the first position in the array. You'd then repeat, finding the next smallest value and swapping it with the value in the second position in the array, and so on until you get to the second-to-last element in the array.
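The procedure above can be sketched in Python like this (again, the language choice and the comparison counter are mine, just to make the running-time analysis below concrete):

```python
def selection_sort(values):
    """Sort a list in place with selection sort.

    Returns the total number of comparisons made, which works out to
    (n - 1) + (n - 2) + ... + 2 + 1."""
    n = len(values)
    comparisons = 0
    for i in range(n - 1):            # one pass per position, up to the second-to-last
        min_index = i
        for j in range(i + 1, n):     # find the minimum of the unsorted remainder
            comparisons += 1
            if values[j] < values[min_index]:
                min_index = j
        # swap the minimum into position i
        values[i], values[min_index] = values[min_index], values[i]
    return comparisons
```

For a 5-element list this makes 4 + 3 + 2 + 1 = 10 comparisons, which is the sum analyzed next.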
The total number of comparisons across all the passes is (n - 1) + (n - 2) + ... + 3 + 2 + 1. This is an arithmetic series, so it sums to n(n - 1)/2: pair the first and last terms ((n - 1) + 1 = n, then (n - 2) + 2 = n, and so on), and you get n a total of (n - 1)/2 times. Expanding n(n - 1)/2 gives n^2/2 - n/2, where n is the number of elements in the array. (This is just to illustrate how to express running time in terms of Big-O.)
For n = 100, T(100) = 100^2/2 - 100/2 = 10,000/2 - 100/2 = 5,000 - 50 = 4,950
For n = 1000, T(1000) = 1,000^2/2 - 1,000/2 = 1,000,000/2 - 1,000/2 = 500,000 - 500 = 499,500
For n = 10,000, T(10,000) = 10,000^2/2 - 10,000/2 = 100,000,000/2 - 10,000/2 = 50,000,000 - 5,000 = 4,995,000
For the above, the dominant term is n^2/2. Drop the lower-order term (n/2) and the constant factor (the 1/2), and the result is n^2. So you'd say the running time for the selection sort is O(n^2).
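You can see numerically why dropping the n/2 term and the constant factor is safe: the ratio T(n)/n^2 settles toward 1/2 as n grows, so for large n the n^2 term is doing essentially all the work. A quick check (this helper is mine, not from the book):

```python
def dominant_ratio(n):
    """Compute T(n) = n^2/2 - n/2, then return T(n) / n^2.

    As n grows, the n/2 term matters less and the ratio approaches 1/2."""
    t = n * n // 2 - n // 2
    return t / (n * n)

for n in (100, 1000, 10000):
    print(n, dominant_ratio(n))
```

For n = 100 the ratio is 0.495; by n = 10,000 it's 0.49995, already almost exactly 1/2.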
I'm sure my examples are kind of lengthy and probably lacking in some way, but I hope they at least give you a good idea of how to express the running time of an algorithm in terms of Big-O. Since you're taking a course in algorithms next semester, you'll learn all you need to know about Big-O notation and how to describe and evaluate algorithms in that class.