Clocks and time: so cool and mysterious. But have you ever wondered how clocks work and how time was digitized in the first place?
For centuries, time measurement was a local concept, with each place having its own standard. This was manageable in the 19th century, when international travel was limited and primarily done by ships and waterways. Navigation relied on the stars and tools such as sextants to keep track of time, distance, and routes.
However, with the advent of faster modes of transportation and increased international travel, a more standardized time measurement was needed. The solution came in the form of GMT (Greenwich Mean Time), a standard time measured at the Royal Observatory in Greenwich, London. All other times around the world were then based on this reference point.
This concept of a unified standard for time measurement was truly revolutionary and changed the way we understand and track time. It made it easier for people to coordinate their schedules and activities across borders and time zones, and laid the foundation for our current globalized world.
UNIX Time
With this bit of history behind us, we now turn our attention to the early 1970s, a seminal moment in the development of computing. This was the era in which the C programming language was introduced, a revolutionary tool that promised to simplify programming and standardize syntax and libraries. One of the key concepts that emerged alongside it was UNIX Time, a standard format for measuring time that is still widely used in compilers, interpreters, and databases.
UNIX Time is based on a simple principle: count time as a single number of seconds, which can then be broken down into years, months, days, and smaller units. This makes it a versatile and widely used standard for timestamps and other time-related applications.
To better understand how this works, let's write a program in C that calculates the hours, minutes, and seconds from a given number of seconds:
#include <stdio.h>

int main(void)
{
    int seconds, minutes, hours;

    /* Read a duration in seconds from standard input */
    if (scanf("%d", &seconds) != 1)
        return 1;

    hours = seconds / 3600;            /* 3600 seconds in an hour */
    minutes = (seconds % 3600) / 60;   /* remaining whole minutes */
    seconds = (seconds % 3600) % 60;   /* leftover seconds */

    printf("%d:%d:%d\n", hours, minutes, seconds);
    return 0;
}
Now, if the user enters, say, 3661 as the input, the hours come out to 3661 / 3600 = 1, the minutes to (3661 % 3600) / 60 = 61 / 60 = 1, and the seconds to (3661 % 3600) % 60 = 61 % 60 = 1, so the program prints 1:1:1.
This is essentially what was set in motion at 00:00:00 UTC on 1 January 1970. That instant was chosen as the epoch, and UNIX Time simply counts the seconds elapsed since it, converting that count into dates and clock times with arithmetic much like the program above.
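To see the epoch in action, here is a minimal sketch (assuming a POSIX-style system, where time_t holds the count of seconds since the epoch) that prints the current UNIX timestamp and its human-readable form using the standard time() and ctime() functions:

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* time(NULL) returns the number of seconds elapsed since
       00:00:00 UTC on 1 January 1970, the UNIX epoch */
    time_t now = time(NULL);

    printf("Seconds since the epoch: %lld\n", (long long)now);
    printf("Which is: %s", ctime(&now));  /* human-readable local time, ends with '\n' */

    return 0;
}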
32 Bit?
Now let's get back to some basics. The seconds were stored in a signed int. On a 32-bit system running C, the range of values a signed int can hold before it overflows is from -2,147,483,648 to 2,147,483,647. This is:
$$[-2^{31},\ 2^{31} - 1]$$
So, our system would be fine until the second count reaches 2,147,483,647. One tick later, the integer overflows and wraps around to -2,147,483,648, which the clocks would interpret as 20:45:52 UTC on Friday, 13 December 1901. This would mean we would go back roughly 136 years.
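The two boundary dates can be verified with a short sketch. This assumes a platform with a 64-bit time_t whose gmtime() accepts timestamps before 1970 (glibc on 64-bit Linux does):

#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
    /* The largest and smallest values a signed 32-bit counter can hold */
    time_t last    = (time_t)INT32_MAX;   /*  2,147,483,647 */
    time_t wrapped = (time_t)INT32_MIN;   /* -2,147,483,648, the value after wraparound */

    char buf[64];

    /* Prints: 03:14:07 UTC on Tuesday, 19 January 2038 */
    strftime(buf, sizeof buf, "%H:%M:%S UTC on %A, %d %B %Y", gmtime(&last));
    printf("INT32_MAX -> %s\n", buf);

    /* Prints: 20:45:52 UTC on Friday, 13 December 1901 */
    strftime(buf, sizeof buf, "%H:%M:%S UTC on %A, %d %B %Y", gmtime(&wrapped));
    printf("INT32_MIN -> %s\n", buf);

    return 0;
}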
This is the 03:14:07 UTC on Tuesday, 19 January 2038 problem, popularly known as the Year 2038 problem. When the clocks strike this exact second, the integer overflows, and we go back roughly 136 years. It is quite funny if you think about it. Below is a representation of it from Wikimedia:
Credit: Wikimedia [https://en.wikipedia.org/wiki/Year_2038_problem]
This bug is sometimes called Y2K38, a problem that seems amusing yet has the potential to wreck our modern architecture in a second.
Vulnerable Systems
Any system that stores time in a 32-bit signed representation carries an inherent risk of failing (a quick way to check your own platform follows the list below):
File systems (many file systems use only 32 bits to represent times in inodes)
Binary file formats (that use 32-bit time fields)
Databases (that have 32-bit time fields)
Database query languages (such as SQL) that have UNIX_TIMESTAMP()-like commands
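A quick, rough way to check whether a given C toolchain and platform still store time in 32 bits is to look at the width of time_t; this is a minimal sketch of that check:

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* On a vulnerable platform this prints 32; modern 64-bit systems
       (and 32-bit Linux built with 64-bit time_t) print 64 */
    printf("time_t on this platform is %zu bits\n", sizeof(time_t) * 8);
    return 0;
}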
The Fix
Now, there are many fixes available, but I personally like a few listed below:
64-Bit Systems: One fix is to upgrade to 64-bit systems, where time is stored as a signed 64-bit count of seconds. Far from merely doubling our capacity, this extends the range to roughly 292 billion years, so we can go on in peace for a very long time.
Java: Java represents time as a signed 64-bit count of milliseconds, which has far more capacity than a 32-bit C int; moving to it buys us roughly 292 million more years. Seems cool, but it would be a chore to port all our current architecture to the JVM.
Unsigned Int: Now this is the easiest fix to implement. We can change the signed int to an unsigned int, which doubles the capacity in the positive direction (at the cost of no longer being able to represent dates before 1970). It is quite short-sighted, while being quite easy to implement, and would give us time till 06:28:15 UTC on Sunday, 7 February 2106, as the sketch after this list shows.
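As a sanity check on that date, here is a small sketch (again assuming a platform with a 64-bit time_t) that converts the largest unsigned 32-bit value into a calendar date:

#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
    /* The last second an unsigned 32-bit counter can represent */
    time_t last = (time_t)UINT32_MAX;   /* 4,294,967,295 */

    char buf[64];
    strftime(buf, sizeof buf, "%H:%M:%S UTC on %A, %d %B %Y", gmtime(&last));

    /* Prints: 06:28:15 UTC on Sunday, 07 February 2106 */
    printf("UINT32_MAX -> %s\n", buf);

    return 0;
}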
Implemented Solutions
Ruby fixed the 2038 issue in its interpreter.
From kernel version 5.6, Linux supports 64-bit time on both its 64-bit and 32-bit builds, which is particularly helpful for embedded systems.
FreeBSD uses 64-bit time as well.
[Can't find windows here, can you? Linux is the best 😎]
MySQL 8.0.28 started supporting 64-bit values in the functions FROM_UNIXTIME() and UNIX_TIMESTAMP() on platforms that support them. In earlier versions, built-in functions like UNIX_TIMESTAMP() simply return 0 after 03:14:07 UTC on 19 January 2038.
Something to Fear?
Critical computer bugs are not a new thing in our world, and we have already been through something similar: the Year 2000 Problem, or the millennium bug. But that's for another time.
Credits
Sources:
The Year 2038 Problem: Wikipedia: https://en.wikipedia.org/wiki/Year_2038_problem
Is the Year 2038 Problem the New Y2K Bug?: The Guardian: https://www.theguardian.com/technology/2014/dec/17/is-the-year-2038-problem-the-new-y2k-bug
Help:
Text Optimization: ChatGPT: https://chat.openai.com
C Code: https://gist.github.com/newtoallofthis123/38d01cc5c579299aa6d919401b10683b
Provider: Hashnode
Author:
Ishan Joshi (NoobScience)
Time: 13:32 13-01-2023 IST
Contact: noobscience@duck.com