A quick look at the density / spatial pattern from geotweets (i.e. those from smartphones mainly)… filtered on keywords as detailed in the image.

Mainly North of the border then….


Open Terminal

cd to the directory containing the SQLite database

sqlite3 dbname


```
sqlite> .headers on
sqlite> .mode csv
sqlite> .output test.csv
sqlite> select * from tbl1;
sqlite> .output stdout
```

source: http://stackoverflow.com/questions/6076984/how-to-export-the-results-of-my-query-to-csv-file-sqlite
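If you'd rather script the export than type the dot-commands interactively, Python's built-in sqlite3 and csv modules can do the same job. A minimal sketch — the in-memory demo table is an assumption standing in for your own database file, and `tbl1` mirrors the example above:

```python
import csv
import sqlite3

# Open the database (":memory:" is a self-contained demo here;
# replace it with the path to your .sqlite file).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl1 (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO tbl1 VALUES (?, ?)", [(1, "alice"), (2, "bob")])

cur = conn.execute("SELECT * FROM tbl1")
with open("test.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cur.description])  # header row (like .headers on)
    writer.writerows(cur)                                 # data rows
```

The `newline=""` on `open()` matters: the csv module handles its own line endings, so this avoids doubled blank lines on Windows.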

Big-O notation is a relative representation of the complexity of an algorithm. These notes are taken from the following sources:

http://stackoverflow.com/questions/487258/plain-english-explanation-of-big-o

http://bigocheatsheet.com/

Suppose each unit of work takes one second. Then the different complexity classes scale like this:

**O(n^{2}):**

- 1 item: 1 second
- 10 items: 100 seconds
- 100 items: 10,000 seconds

Notice that the number of items goes up by a factor of 10, but the time goes up by a factor of 10^{2}. Basically, n=10 and so O(n^{2}) gives us the scaling factor n^{2} which is 10^{2}.

**O(n):**

- 1 item: 1 second
- 10 items: 10 seconds
- 100 items: 100 seconds

**O(log n):**

- 1 item: 1 second
- 10 items: 2 seconds
- 100 items: 3 seconds

**O(1):**

- 1 item: 1 second
- 10 items: 1 second
- 100 items: 1 second
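The growth rates of these complexity classes can be tabulated in a few lines of Python, assuming one second per unit of work and base-10 logarithms to match the item/seconds lists in these notes:

```python
import math

# Seconds of work at each input size, for each complexity class.
# log base 10 (+1) matches the 1/2/3-second figures for O(log n).
print(f"{'items':>6} {'O(1)':>6} {'O(log n)':>9} {'O(n)':>6} {'O(n^2)':>8}")
for n in (1, 10, 100):
    print(f"{n:>6} {1:>6} {int(math.log10(n)) + 1:>9} {n:>6} {n * n:>8}")
```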

If we add two 100 digit numbers together we have to do 100 additions. If we add **two** 10,000 digit numbers we have to do 10,000 additions.

See the pattern? The **complexity** (being the number of operations) is directly proportional to the number of digits *n* in the larger number. We call this **O(n)** or **linear complexity**.
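The schoolbook addition described above can be sketched in Python. The list-of-digits representation is just an illustrative assumption, but it makes the one-operation-per-digit (O(n)) cost explicit:

```python
def add_digit_lists(a, b):
    """Add two numbers given as lists of decimal digits (least
    significant digit first). One digit-addition per position:
    O(n) in the number of digits of the larger number."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        da = a[i] if i < len(a) else 0
        db = b[i] if i < len(b) else 0
        carry, digit = divmod(da + db + carry, 10)
        result.append(digit)
    if carry:
        result.append(carry)
    return result

# 456 + 789 = 1245, digits stored least significant first
print(add_digit_lists([6, 5, 4], [9, 8, 7]))  # [5, 4, 2, 1]
```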

___________________________________

Now if you were instructing a computer to look up the phone number for "John Smith" in a telephone book that contains 1,000,000 names, what would you do? Ignoring the fact that you could guess how far in the S's started (let's assume you can't), what would you do?

A typical implementation might be to open up to the middle, take the 500,000^{th} and compare it to "Smith". If it happens to be "Smith, John", we just got real lucky. Far more likely is that "John Smith" will be before or after that name. If it's after we then divide the last half of the phone book in half and repeat. If it's before then we divide the first half of the phone book in half and repeat. And so on.

This is called a **binary search** and is used every day in programming whether you realize it or not.

So if you want to find a name in a phone book of a million names you can actually find any name by doing this at most 20 times. In comparing search algorithms we decide that this comparison is our 'n'.

That is staggeringly good, isn't it?

- For a phone book of 3 names it takes 2 comparisons (at most).
- For 7 it takes at most 3.
- For 15 it takes 4.
- …
- For 1,000,000 it takes 20.

In Big-O terms this is **O(log n)** or **logarithmic complexity**. Now the logarithm in question could be ln (base e), log_{10}, log_{2} or some other base. It doesn't matter: it's still O(log n), just like O(2n^{2}) and O(100n^{2}) are still both O(n^{2}).
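The halving procedure above translates directly into code. A minimal sketch, with a comparison counter added purely to make the at-most-20 claim checkable:

```python
def binary_search(names, target):
    """Search a sorted list, halving the range each step.
    Returns (index, comparisons); index is -1 if absent."""
    lo, hi = 0, len(names) - 1
    comparisons = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if names[mid] == target:
            return mid, comparisons
        elif names[mid] < target:
            lo = mid + 1   # target is in the second half
        else:
            hi = mid - 1   # target is in the first half
    return -1, comparisons

# A sorted "phone book" of a million zero-padded names: each step
# discards half of what remains, so at most ~20 comparisons.
book = [f"name{i:07d}" for i in range(1_000_000)]
idx, steps = binary_search(book, "name0765432")
print(idx, steps)  # steps is at most 20
```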

It's worthwhile at this point to explain that Big-O can be used to describe three cases for an algorithm:

- **Best Case:** in the telephone book search, the best case is that we find the name in one comparison. This is O(1) or constant complexity;
- **Expected Case:** as discussed above, this is O(log n); and
- **Worst Case:** this is also O(log n).

__________________________

### Polynomial Time

Another point I wanted to make quick mention of is that any algorithm that has a complexity of **O(n^{a})** is said to have **polynomial complexity** or is solvable in **polynomial time**.

O(n), O(n^{2}) etc. are all polynomial time. Some problems cannot be solved in polynomial time, and certain real-world systems exist precisely because of this. Public key cryptography is a prime example: it is computationally hard to find the two prime factors of a very large number. If it wasn't, we couldn't use the public key systems we use.
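To make the factoring point concrete, here is the naive approach, trial division. It takes on the order of sqrt(n) steps, which blows up exponentially in the number of digits of n; real factoring algorithms are cleverer, but the problem is still believed to be hard for properly sized keys:

```python
def trial_division_factor(n):
    """Return the smallest prime factor of n by trial division.
    For a semiprime n = p * q this needs about sqrt(n) divisions,
    which is exponential in the number of digits of n."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n itself is prime

# A tiny semiprime: 2021 = 43 * 47. At real key sizes (hundreds of
# digits), the same loop would run for longer than the age of the universe.
p = trial_division_factor(2021)
print(p, 2021 // p)  # 43 47
```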


Flyover of Chambers Street on a Digital Surface Model built from a LIDAR dataset.
