Interestingly, most computer languages in use today reserve 4 bytes of memory for a standard integer. That means the biggest numbers you can represent in their entirety run from -2,147,483,648 to 2,147,483,647. Past that you have to round off the least significant digits and switch to an exponent (i.e., a floating-point type), which means you can lose precision, or use another workaround.
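Here's a minimal Java sketch of those limits, since Java comes up below anyway. Note what happens when you step one past the maximum: the value silently wraps around to the minimum.

```java
// 32-bit signed int limits in Java, and overflow wraparound.
public class IntLimits {
    public static void main(String[] args) {
        System.out.println(Integer.MAX_VALUE);     // prints  2147483647
        System.out.println(Integer.MIN_VALUE);     // prints -2147483648
        // Arithmetic past the max doesn't error; it wraps:
        System.out.println(Integer.MAX_VALUE + 1); // prints -2147483648
    }
}
```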
64-bit architectures have enabled double-length integer variables in recent versions of C, C#, Java, SQL, Pascal, and probably others that I didn't look up. That limits us to 9,223,372,036,854,775,807 (or 18,446,744,073,709,551,615 unsigned, unless you're in Java, which apparently doesn't support unsigned integer types).
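A companion sketch for the 64-bit case, again in Java. Since there's no unsigned long to squeeze out that last bit, one common workaround (and this is just one option) is java.math.BigInteger, which trades fixed-width speed for arbitrary precision:

```java
import java.math.BigInteger;

// 64-bit signed long limit in Java, and going past it with BigInteger.
public class LongLimits {
    public static void main(String[] args) {
        System.out.println(Long.MAX_VALUE); // prints 9223372036854775807

        // BigInteger has no fixed width, so this addition doesn't wrap:
        BigInteger big = BigInteger.valueOf(Long.MAX_VALUE);
        System.out.println(big.add(BigInteger.ONE)); // prints 9223372036854775808
    }
}
```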
I didn't really know any of that. Had to ask Google. I also realize I'm being That Guy.