Because the same timestamp can refer to two distinct instants of time around a leap second, Unix time is neither a linear measure of time nor a true representation of UTC. On systems where Unix time is represented as a signed 32-bit number, the representation will end after the completion of 2,147,483,647 (2^31 − 1) seconds from the epoch, at 03:14:07 UTC on 19 January 2038. This is referred to as the "Year 2038 problem", when the 32-bit signed Unix time will overflow and the count will wrap to a negative value.
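The wrap-around can be sketched in a few lines of Python; `wrap32` is an illustrative helper that mimics signed 32-bit integer arithmetic, not a real system call:

```python
def wrap32(t: int) -> int:
    """Simulate overflow of a signed 32-bit time_t counter (illustrative helper)."""
    return (t + 2**31) % 2**32 - 2**31

last_second = 2**31 - 1          # 2,147,483,647: the last representable moment
print(wrap32(last_second + 1))   # -2147483648: the count wraps negative
```

On a real system the wrapped value would be interpreted as a time shortly before 14 December 1901.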
Two layers of encoding make up Unix time. The first layer encodes a point in time as a scalar real number which represents the number of seconds that have passed since 00:00:00 UTC on 1 January 1970. As is standard with UTC, this article labels days using the Gregorian calendar and counts times within each day in hours, minutes, and seconds.
Some of the examples also show International Atomic Time (TAI), another time scheme which uses the same seconds and is displayed in the same format as UTC, but in which every day is exactly 86,400 seconds long; it gradually loses synchronization with the Earth's rotation at a rate of roughly one second per year. Unix time is a single signed number which increments every second, without requiring the calculations to determine year, month, day of month, hour, and minute that are needed for intelligibility to humans.
Modern Unix time is based on UTC, which counts time using SI seconds, and breaks up the span of time into days that are almost always 86,400 seconds long, but due to leap seconds are occasionally 86,401 seconds. The extra second keeps each day synchronized with the rotation of the Earth, per Universal Time.
The Unix epoch is the time 00:00:00 UTC on 1 January 1970. There is a problem with this definition, in that UTC did not exist in its current form until 1972; this issue is discussed below.
For brevity, the remainder of this section uses ISO 8601 date format, in which the Unix epoch is 1970-01-01T00:00:00Z. The Unix time number is zero at the Unix epoch, and increases by exactly 86,400 per day since the epoch; thus 2004-09-16T00:00:00Z, 12,677 days after the epoch, is represented by the Unix time number 12,677 × 86,400 = 1,095,292,800. This can be extended backwards from the epoch too, using negative numbers; thus 1957-10-04T00:00:00Z, 4,472 days before the epoch, is represented by −4,472 × 86,400 = −386,380,800. Within each day, the Unix time number is the number calculated as above for midnight UTC plus the number of seconds elapsed since that midnight. On dates before the epoch the number still increases, thus becoming less negative, as time moves forward.
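The day arithmetic above is easy to check in Python; `unix_midnight` is an illustrative helper, and `datetime.date` subtraction supplies the day count:

```python
from datetime import date

SECONDS_PER_DAY = 86_400

def unix_midnight(d: date) -> int:
    """Unix time number at midnight UTC on day d:
    (days since the epoch) x 86,400. Works for pre-epoch dates too."""
    return (d - date(1970, 1, 1)).days * SECONDS_PER_DAY

print(unix_midnight(date(2004, 9, 16)))  # 1095292800  (12,677 days after the epoch)
print(unix_midnight(date(1957, 10, 4)))  # -386380800  (4,472 days before the epoch)
```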
The above scheme means that on a normal UTC day, which has a duration of 86,400 seconds, the Unix time number changes in a continuous manner across midnight.
For example, at the end of the day used in the examples above, the time representations progress continuously: 2004-09-16T23:59:59Z is represented by 1,095,379,199, and 2004-09-17T00:00:00Z by 1,095,379,200, with no discontinuity. When a leap second occurs, so that the UTC day is not exactly 86,400 seconds long, a discontinuity occurs in the Unix time number.
The Unix time number increases by exactly 86,400 each day, regardless of how long the day is. If a leap second were deleted, the Unix time number would jump up by 1 at the point where the leap second was deleted, which is the end of the day; no negative leap seconds have yet been implemented. When a leap second is inserted, which happens about every year and a half on average, the Unix time number increases continuously during the leap second, during which it is more than 86,400 seconds since the start of the current day, and then jumps back by 1 at the end of the leap second, which is the start of the next day.
When a positive leap second occurs, the Unix time numbers repeat themselves: the number reached during the leap second is repeated during the first second of the following day. In the theoretical case when a negative leap second occurs, no ambiguity is caused; instead there is a range of Unix time numbers that do not refer to any point in time at all.
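The ambiguity can be demonstrated with a toy encoder. `posix_time` below is an illustrative sketch of the POSIX rule, not a real API: it maps (days since the epoch, second of day) to a Unix time number, where second 86,400 stands for an inserted leap second (UTC 23:59:60):

```python
SECONDS_PER_DAY = 86_400

def posix_time(days_since_epoch: int, second_of_day: int) -> int:
    """Toy model of the POSIX rule. second_of_day may be 86,400 during an
    inserted leap second (UTC 23:59:60); that instant receives the same
    number as the following midnight, so two UTC instants share one number."""
    if second_of_day >= SECONDS_PER_DAY:  # the inserted leap second
        return (days_since_epoch + 1) * SECONDS_PER_DAY
    return days_since_epoch * SECONDS_PER_DAY + second_of_day

# The leap second and the following midnight are indistinguishable:
print(posix_time(10, 86_400) == posix_time(11, 0))  # True
```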
A Unix clock is often implemented with a different type of positive leap second handling associated with the Network Time Protocol (NTP).
See the section below concerning NTP for details. When dealing with periods that do not encompass a UTC leap second, the difference between two Unix time numbers is equal to the duration in seconds of the period between the corresponding points in time.
This is a common computational technique. However, where leap seconds occur, such calculations give the wrong answer. In applications where this level of accuracy is required, it is necessary to consult a table of leap seconds when dealing with Unix times, and it is often preferable to use a different time encoding that does not suffer from this problem.
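Such a leap-second correction can be sketched as follows. The three table entries are the Unix time numbers of the midnights following the first three real leap second insertions (mid-1972, end of 1972, end of 1973); a production table must be complete and kept up to date, and `elapsed_seconds` is an illustrative helper:

```python
import bisect

# Unix time number at the start of the day following each leap second
# insertion (partial table, for illustration only).
LEAP_INSERTIONS = [78_796_800,    # 1972-07-01
                   94_694_400,    # 1973-01-01
                   126_230_400]   # 1974-01-01

def elapsed_seconds(t1: int, t2: int) -> int:
    """True SI seconds elapsed between two Unix time numbers (t1 <= t2):
    the naive difference plus one second per leap second in the interval."""
    leaps = (bisect.bisect_right(LEAP_INSERTIONS, t2)
             - bisect.bisect_right(LEAP_INSERTIONS, t1))
    return (t2 - t1) + leaps

# Across the 1972-06-30 leap second, 2 SI seconds elapse although the
# Unix time numbers differ by only 1:
print(elapsed_seconds(78_796_799, 78_796_800))  # 2
```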
A Unix time number is easily converted back into a UTC time by taking the quotient and modulus of the Unix time number, modulo 86,400. The quotient is the number of days since the epoch, and the modulus is the number of seconds since midnight UTC on that day. If given a Unix time number that is ambiguous due to a positive leap second, this algorithm interprets it as the time just after midnight. It never generates a time that is during a leap second. If given a Unix time number that is invalid due to a negative leap second, it generates an equally invalid UTC time.
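In Python this quotient-and-modulus decoding is a single `divmod`; `decode` is an illustrative helper:

```python
SECONDS_PER_DAY = 86_400

def decode(unix_time: int) -> tuple[int, int]:
    """Split a Unix time number into (days since the epoch, seconds since
    midnight UTC). Python's divmod floors toward negative infinity, so
    this also works for negative, pre-epoch time numbers."""
    return divmod(unix_time, SECONDS_PER_DAY)

print(decode(1_095_292_800))  # (12677, 0): midnight starting 2004-09-16
print(decode(-386_380_799))   # (-4472, 1): one second into 1957-10-04
```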
If these conditions are significant, it is necessary to consult a table of leap seconds to detect them. Commonly a Mills-style Unix clock is implemented with leap second handling that is not synchronous with the change of the Unix time number.
The time number initially decreases where a leap should have occurred, and then it leaps to the correct time 1 second after the leap. This makes implementation easier, and is described by Mills' paper. This can be decoded properly by paying attention to the leap second state variable, which unambiguously indicates whether the leap has been performed yet. The state variable change is synchronous with the leap.
A similar situation arises with a negative leap second, where the second that is skipped is slightly too late. Collecting the leap second state variable along with the time number allows for unambiguous decoding: the correct POSIX time number can be generated if desired, or the full UTC time can be stored in a more suitable format. The decoding logic required to cope with this style of Unix clock would also correctly decode a hypothetical POSIX-conforming clock using the same interface.
This requires synchronous leap second handling. It is probably the best way to express UTC time in Unix clock form, via a Unix interface, when the underlying clock is fundamentally untroubled by leap seconds.
A Unix clock may instead count elapsed TAI seconds rather than UTC seconds; because TAI has no leap seconds, this makes time interval arithmetic much easier. In these systems it is necessary to consult a table of leap seconds to correctly convert between UTC and the pseudo-Unix-time representation. This resembles the manner in which time zone tables must be consulted to convert to and from civil time; the IANA time zone database includes leap second information, and the sample code available from the same source uses that information to convert between TAI-based time stamps and local time.
Conversion also runs into definitional problems prior to the commencement of the current form of UTC (see the section on the UTC basis below). This TAI-based system, despite its superficial resemblance, is not Unix time. A Unix time number can be represented in any form capable of representing numbers. In some applications the number is simply represented textually as a string of decimal digits, raising only trivial additional problems.
However, certain binary representations of Unix times are particularly significant. The Unix time_t data type has traditionally been a signed 32-bit integer on many platforms. Being 32 bits means that it covers a range of about 136 years in total. The minimum representable time is 20:45:52 UTC on Friday, 13 December 1901, and the maximum representable time is 03:14:07 UTC on Tuesday, 19 January 2038. One second after that maximum, the representation overflows. This milestone is anticipated with a mixture of amusement and dread (see the year 2038 problem). On some newer systems, time_t has been widened to 64 bits. This expands the times representable by approximately 292 billion years in both directions, which is over twenty times the present age of the universe per direction.
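The 32-bit bounds can be verified directly; `to_utc` is an illustrative wrapper around Python's timezone-aware conversion, which handles negative timestamps by pure arithmetic:

```python
from datetime import datetime, timezone

def to_utc(t: int) -> datetime:
    """Convert a Unix time number to an aware UTC datetime (illustrative helper)."""
    return datetime.fromtimestamp(t, tz=timezone.utc)

print(to_utc(-2**31))     # 1901-12-13 20:45:52+00:00  (32-bit minimum)
print(to_utc(2**31 - 1))  # 2038-01-19 03:14:07+00:00  (32-bit maximum)
```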
If unsigned, its range in the future would be doubled, postponing the 32-bit overflow by 68 years. However, it would then be incapable of representing times prior to the epoch. Unix has no tradition of directly representing non-integer Unix time numbers as binary fractions. Instead, times with sub-second precision are represented using composite data types consisting of two integers: a time_t holding the whole seconds, and a fractional part counted in millionths (struct timeval) or billionths (struct timespec) of a second. These structures provide a decimal-based fixed-point data format, which is useful for some applications, and trivial to convert for others.
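A minimal sketch of such a composite representation, modeled on C's struct timespec; `Timespec` and `from_float` are illustrative names, not a real library API:

```python
from typing import NamedTuple

class Timespec(NamedTuple):
    """Composite sub-second time, analogous to C's struct timespec."""
    tv_sec: int    # whole seconds since the epoch (a time_t)
    tv_nsec: int   # fractional part in billionths: 0 <= tv_nsec < 1_000_000_000

def from_float(t: float) -> Timespec:
    """Split a fractional Unix time into the two-integer form."""
    sec = int(t // 1)  # floor, so pre-epoch times keep a non-negative fraction
    return Timespec(sec, round((t - sec) * 1_000_000_000))

print(from_float(1.5))  # Timespec(tv_sec=1, tv_nsec=500000000)
```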
The present form of UTC, with leap seconds, is defined only from 1 January 1972 onwards. Prior to that, since 1 January 1961 there was an older form of UTC in which not only were there occasional time steps, which were by non-integer numbers of seconds, but also the UTC second was slightly longer than the SI second, and was periodically changed to continuously approximate the Earth's rotation.
Prior to 1961 there was no UTC, and prior to 1958 there was no widespread atomic timekeeping; in these eras, some approximation of GMT, based directly on the Earth's rotation, was used instead of an atomic timescale. Fortunately, the fact that the Unix epoch predates the start of this form of UTC does not affect its use in this era: the basis of Unix times referring to moments before 1972 is best understood to be an unspecified approximation of UTC. Computers of that era rarely had clocks set sufficiently accurately to provide meaningful sub-second timestamps in any case.
Unix time is not a suitable way to represent times prior to 1972 in applications requiring sub-second precision; such applications must, at least, define which form of UT or GMT they use. The possibility of ending the use of leap seconds in civil time is being considered. If this happens, it is likely that Unix time will be prospectively defined in terms of the new time scale, instead of UTC.
Uncertainty about whether this will occur makes prospective Unix time no less predictable than it already is. In Unix-like operating systems, date is the command which prints or sets the current time. In early versions of Unix, the epoch also differed from the current value: the first edition Unix Programmer's Manual, dated 3 November 1971, defines the Unix time as "the time since 00:00:00, 1 January 1971, measured in sixtieths of a second". Counted in sixtieths, a 32-bit value yielded a range of only about two and a quarter years, though with more than half the range in the past (see the discussion of signedness above).
As indicated by the definition quoted above, the Unix time scale was originally intended to be a simple linear representation of time elapsed since an epoch. However, there was no consideration of the details of time scales, and it was implicitly assumed that there was a simple linear time scale already available and agreed upon.
The first edition manual's definition does not even specify which time zone is used. Several later problems, including the complexity of the present definition, result from Unix time having been defined gradually by usage rather than fully defined from the outset. The POSIX committee considered whether Unix time should remain, as intended, a linear count of seconds since the epoch, at the expense of complexity in conversions with civil time, or a representation of civil time, at the expense of inconsistency around leap seconds.
Computer clocks of the era were not sufficiently precisely set to form a precedent one way or the other. The POSIX committee was swayed by arguments against complexity in the library functions, and firmly defined the Unix time in a simple manner in terms of the elements of UTC time.
Unfortunately, this definition was so simple that it did not even encompass the entire leap year rule of the Gregorian calendar, and would make 2100 a leap year. Since the mid-1990s, computer clocks have been routinely set with sufficient precision for this to matter, and they have most commonly been set using the UTC-based definition of Unix time.
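The difference between the full Gregorian rule and the over-simplified "divisible by four" rule is easy to show; both functions below are illustrative:

```python
def gregorian_leap(year: int) -> bool:
    """Full Gregorian rule: every 4th year, except century years
    not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def naive_leap(year: int) -> bool:
    """The over-simplified rule: every 4th year, with no century exception."""
    return year % 4 == 0

# The two rules agree on 2000 but diverge on 2100:
print(gregorian_leap(2000), naive_leap(2000))  # True True
print(gregorian_leap(2100), naive_leap(2100))  # False True
```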
This has resulted in considerable complexity in Unix implementations, and in the Network Time Protocol, to execute steps in the Unix time number whenever leap seconds occur. As the use of Unix time has spread, so has the practice of celebrating its milestones. The events that these celebrate are typically described as "N seconds since the Unix epoch", but this is inaccurate; as discussed above, due to the handling of leap seconds in Unix time, the number of seconds elapsed since the Unix epoch is slightly greater than the Unix time number for times later than the epoch.
Vernor Vinge's novel A Deepness in the Sky describes a spacefaring trading civilization thousands of years in the future that still uses the Unix epoch. The "programmer-archaeologist" responsible for finding and maintaining usable code in mature computer systems first believes that the epoch refers to the time when man first walked on the Moon, but then realizes that it is "the 0-second of one of Humankind's first computer operating systems".
From Wikipedia, the free encyclopedia.