If you need to get caught up, the posts you'll want to go through before proceeding with this one are:
With a datasource created in unixODBC this last part is rather anti-climactic, but this is of course also where you can get some real work done in Python.
First, in a Python console ('python' from a terminal or whatever your favorite Python console tool is), import pyodbc:
>>> import pyodbc
Next, create a connection to the datasource using the datasource name, which in this case we'll assume is foo:
>>> conn = pyodbc.connect('DSN=foo;UID=username;PWD=password')
If you don't get any errors, the connection was successful. Next, create a cursor on that connection:
>>> cursor = conn.cursor()
Now we can execute a query against SQL Server using that cursor:
>>> cursor.execute("SELECT * FROM bar")
If the query ran successfully, you'll see that a pyodbc.Cursor object is returned:
<pyodbc.Cursor object at 0x15a5a50>
Next, and this is not the most efficient way to do things (see below) but is good for demonstration purposes, let's call fetchall() on the cursor and assign the result to a variable called rows:
>>> rows = cursor.fetchall()
This returns a list of pyodbc Row objects, which are basically tuples.
Finally, we can iterate over the rows and output the results of the query:
>>> for row in rows:
...     print(row)
This will output each row so you can see the results of your handiwork.
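Putting it all together: since pyodbc implements the Python DB-API, the same flow can be demonstrated end to end with the standard library's sqlite3 module, which is handy if you want to try the pattern without a live SQL Server. The DSN string, table name bar, and sample data below are just the placeholder values used in this post.

```python
import sqlite3  # stand-in for pyodbc; both implement the Python DB-API

# With pyodbc against a real datasource this line would instead be:
# conn = pyodbc.connect('DSN=foo;UID=username;PWD=password')
conn = sqlite3.connect(':memory:')

cursor = conn.cursor()

# Create and populate a throwaway table so the query has data to return
cursor.execute("CREATE TABLE bar (id INTEGER, name TEXT)")
cursor.executemany("INSERT INTO bar VALUES (?, ?)",
                   [(1, 'alpha'), (2, 'beta')])

# The same execute/fetchall/iterate steps shown above
cursor.execute("SELECT * FROM bar")
rows = cursor.fetchall()
for row in rows:
    print(row)

conn.close()
```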
That's all there is to it!
Of course there are a lot of ways to get at the individual column values returned from the query, so be sure and check out the pyodbc Getting Started wiki for details.
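As a quick sketch of a couple of those access patterns (again using sqlite3 as a DB-API stand-in; pyodbc Row objects additionally support attribute access by column name, e.g. row.name):

```python
import sqlite3  # DB-API stand-in; the access patterns mirror pyodbc

conn = sqlite3.connect(':memory:')
cursor = conn.cursor()
cursor.execute("CREATE TABLE bar (id INTEGER, name TEXT)")
cursor.execute("INSERT INTO bar VALUES (1, 'alpha')")

cursor.execute("SELECT id, name FROM bar")
row = cursor.fetchone()

# Access columns by position, just like a tuple
row_id, name = row[0], row[1]

# Column names are available from cursor.description
# (with pyodbc you could also write row.id or row.name)
columns = [desc[0] for desc in cursor.description]

print(columns, row_id, name)
conn.close()
```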
Note that there are numerous considerations depending on the volume and nature of the data with which you're dealing. For example, with a large result set fetchall() isn't advisable, since it loads all the results into memory at once; calling fetchone() in a loop is much more memory efficient.
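A memory-friendly sketch of that fetchone() pattern (once more using sqlite3 as a DB-API stand-in for pyodbc, with made-up sample data):

```python
import sqlite3  # DB-API stand-in for pyodbc

conn = sqlite3.connect(':memory:')
cursor = conn.cursor()
cursor.execute("CREATE TABLE bar (id INTEGER)")
cursor.executemany("INSERT INTO bar VALUES (?)", [(i,) for i in range(1000)])

cursor.execute("SELECT id FROM bar")

# Fetch one row at a time so only a single row is in memory at once
count = 0
while True:
    row = cursor.fetchone()
    if row is None:  # fetchone() returns None when the results are exhausted
        break
    count += 1

print(count)
conn.close()
```

pyodbc cursors are also directly iterable (for row in cursor:), which fetches rows incrementally in the same memory-friendly way.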
You can learn all about the objects, methods, etc. involved with pyodbc in the pyodbc wiki.
Next up is pymssql!