
The History of Metastability


In a synchronous system, data signals always meet the input timing requirements of flip-flops because the relationship between data and clock is fixed; therefore, metastability does not occur. Nevertheless, in most multiple-clock systems and asynchronous systems, input data regularly violates the setup and hold timing conditions of bistable elements, because the switching rates and phase relationship of the input data and the clock are inconsistent. This violation delays the output signals and may lead to metastable outputs, which add further delay before a valid and stable output value, logic '1' or logic '0', is produced. Therefore, it is important to carefully analyse and design bistable elements prone to metastability for minimum metastability time without impacting performance. Metastability is a hazardous anomalous phenomenon that can take place in any bistable or sequential circuit, and particularly often in synchronizers and arbiters. It is an unstable equilibrium voltage point between the valid logic levels (0 and VDD), usually around ½VDD. That voltage point is equivalent to the middle voltage, which is the switching (inverting) point of the gates composing the bistable circuit. If the bistable circuit has a long feedback path, then metastability develops into oscillation around the middle voltage, as observed in some obsolete technologies, such as TTL NAND gates composing a set-reset latch [30]. By analogy with a bistable latch, a ball travelling over a hill has two stable points at the bottom of either side of the hill and one metastable point at the top. If the force pushing the ball is not enough to cross the hill, the ball falls back to the bottom; if it is enough, the ball falls to the other stable point; but if the force is only just enough to reach the top, the ball stays still unless there is a disturbance in the environment, due to wind for example.
The most common cause of recurring metastability events is the conflict between incoming signals and the timing restrictions. Likewise, metastability may be initiated by the resolution of a previous event in a preceding sequential stage violating the next stage's timing conditions, which is known as the back edge of the clock effect [44]. For example, if the master latch in a master-slave flip-flop exhibits metastability that holds the master latch for a long time and resolves near the slave-latch timing condition at the back edge of the clock, this instantiates a new metastability event in the slave latch, which then needs more time to recover. In other circumstances, metastability may be transferred between logic gates, or from the master latch to the slave latch if the flip-flop is not designed cautiously. Metastability may also occur due to a very short gated clock pulse, or even a poorly timed clear or reset signal [44]. Moreover, in the event of a single-event upset due to alpha particles, the resulting current spike could last long enough to flip a cell or induce metastability. However, this thesis concentrates only on metastable events caused by asynchronous input signals from an asynchronous system or a differently clocked system, because this is by far the most frequent cause of metastability and is considered one of the most difficult problems to deal with in synchronization. If metastability appears at the output of a latch driving some logic stage, the subsequent logic will behave unpredictably: some gates may interpret the invalid voltage level as a logic one while others interpret it as a logic zero. As a result, metastability may produce failures in which data is lost, corrupted or duplicated, causing a system failure and, in particular circumstances, a system deadlock. In a multiple-clock system metastability is unavoidable, but there are several design techniques to reduce the chance of failures due to metastability.
This section provides a deep overview of metastability behaviour and analysis in bistable elements in general, and specifically in synchronizers and arbiters, followed by metastability scaling impact and reduction techniques. Figure [ref] shows simulation waveforms of the output of a simple latch going metastable at different data arrival times, stepped 1 ps closer to the falling edge of the clock each time. The flip-flop spends a stretched period of time virtually balanced at an unstable point of equilibrium between the two stable states. If the flip-flop sits at the exact balance point, it could remain there for an indefinite length of time. In theory, the probability of staying exactly at that balance point approaches zero, so this hardly ever happens. However, as the operating clock frequency increases, there is a growing, albeit slim, possibility that the flip-flop will remain metastable for longer than the time available.

Metastability Behaviour and Analysis

To comprehend the behaviour of metastability, a simple latch composed of two back-to-back symmetric inverters is used to explain and analyse its nature.

Large signal analysis

Large-signal analysis determines the metastable DC voltage level of the latch. The DC voltage-transfer characteristics of both inverters can be superimposed on each other as shown in Figure [ref]. This graph shows three intersection points: two stable points on the sides and one unstable point in the middle. The stable points signify inversion of voltage A and voltage B from 0 to VDD and vice versa. For instance, regarding inverter 1, stable0/1 is for inversion from low to high, and the opposite for stable1/0. The unstable point denotes the middle voltage Vm at which inversion occurs and metastability is upheld; in the case of matching inverters it is the same value for both. The metastable DC voltage level can be computed by the switching-voltage formula of an inverter [10]. The mid-voltage point is considered the balanced point of equilibrium, as depicted by the bell shape: if both latch nodes reach the mid-voltage point simultaneously, the push-pull force of the inverters is diminished. The variable rm is the ratio between the NMOS and PMOS transistors, based on the square-law model in Equation [ref] and on the velocity-saturation model in (5c). This ratio rm is an important factor in moving the inversion and metastable point to a lower or higher voltage. For instance, if the inverters have transistors with similar absolute threshold voltages and their ratio equals one, then the metastable level is VDD/2; ratios above 1 lower the metastable point below VDD/2, while ratios smaller than 1 lift it above VDD/2. Since process parameters are technology dependent, the only design parameter available to skew the inversion point is the transistor width ratio. During a metastability event the power consumption is increased, because all latch transistors are actually turned on, creating a short-circuit path from the supply voltage to ground that continuously draws a large current until the event resolves.
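As a quick numerical sketch of the switching-voltage formula under the long-channel square-law model (the function and parameter names below are illustrative, not taken from the text):

```python
import math

def metastable_voltage(vdd, vtn, vtp_abs, rm):
    """Estimate the metastable (switching) voltage of a CMOS inverter
    using the long-channel square-law model. rm is the NMOS-to-PMOS
    strength ratio; all names here are illustrative."""
    # Equate NMOS and PMOS saturation currents at Vin = Vout = Vm:
    #   (kn/2)(Vm - Vtn)^2 = (kp/2)(VDD - Vm - |Vtp|)^2
    # Solving for Vm with r = sqrt(kp/kn) = sqrt(1/rm):
    r = math.sqrt(1.0 / rm)
    return (vtn + r * (vdd - vtp_abs)) / (1.0 + r)

# A matched inverter (rm = 1, equal thresholds) sits at VDD/2;
# a stronger NMOS (rm > 1) pulls the metastable point below VDD/2.
print(metastable_voltage(1.0, 0.3, 0.3, 1.0))  # 0.5
print(metastable_voltage(1.0, 0.3, 0.3, 4.0))  # below 0.5
```

This reproduces the observation in the text that rm above 1 lowers the metastable point and rm below 1 raises it.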

Small signal analysis

Small-signal analysis determines the time dynamics of metastability in the latch, in order to characterize the length of an event. The literature shows two different models for analysing the timing behaviour of metastability. Both assume that nodes A and B are at the metastable DC level at time t = 0. A simple method verified in [44] represents each inverter gate by a linear amplifier model composed of a voltage amplifier with gain −A, in series with a resistance and a capacitance. Another simple model, which appeared in [52], includes second-order effects and is based on a two-port transconductance amplifier with an output resistance, an output capacitance and a Miller capacitance. The Miller effect accounts for the capacitance between input and output, which is the sum of the gate-to-drain capacitances of the PMOS and NMOS transistors in an inverter. The output capacitance, output resistance, transconductance and Miller capacitance of inverter 1 are computed as in Equations [ref] to [ref], and similarly for inverter 2. The absolute voltage gain |A1| and bandwidth of inverter 1 can be derived in a similar manner to those of the push-pull inverting amplifier in [53]. They are estimated using Equations [ref] and [ref], and the gain-bandwidth product in Equation [ref].

Linear Amplifier Latch Model

The voltage-amplifier model of the latch depicted in Figure [ref] is analysed with a system of differential equations. Suppose the inverter parameters are identical, that is Cout = C1 = C2, A = A1 = A2 and Rout = R1 = R2, and the inverters have high gain (A ≫ 1); this results in Equation [ref], which is equivalent to [ref]. The equations are solved for the differential-mode voltage VDM = VA − VB and for the common-mode voltage VCM = (VA + VB)/2, from which the voltage at node A can be written. The values of Vdm0 and Vcm0 are determined by the initial conditions before metastability is initiated. This model is only valid within the linear region around the metastable level. The common-mode voltage is an exponentially decaying term that diminishes quickly and can be ignored, whereas the differential-mode voltage is an exponentially increasing term that represents the resolution response.
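The solution equations lost in this passage can be reconstructed from the standard linear amplifier latch model (each inverter as gain −A with output resistance R and capacitance C, small-signal voltages measured as deviations from Vm); this is a sketch consistent with the surrounding text, not the thesis's exact equations:

```latex
% Nodal equations of the two-amplifier latch model:
%   RC \frac{dV_A}{dt} = -A V_B - V_A, \qquad
%   RC \frac{dV_B}{dt} = -A V_A - V_B
% Subtracting and adding the two equations decouples the modes:
\begin{align*}
V_{DM}(t) &= V_{dm0}\, e^{(A-1)t/RC} \approx V_{dm0}\, e^{t/\tau},
  \qquad \tau \approx \frac{RC}{A} \;\; (A \gg 1),\\
V_{CM}(t) &= V_{cm0}\, e^{-(A+1)t/RC},\\
V_A(t) &= V_m + V_{CM}(t) + \tfrac{1}{2} V_{DM}(t).
\end{align*}
```

The decaying common-mode term and the growing differential-mode term here match the qualitative description in the paragraph above.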

Miller Effect Latch Model

Using nodal analysis to find the node voltages VA and VB gives the following equations. Assume symmetric inverter parameters, namely Cout = C1 = C2, gm = gm1 = gm2, CM = CM1 = CM2 and Rout = R1 = R2, and suppose that the transconductance at the metastable level is much greater than the output conductance and the output capacitance is greater than the Miller capacitance. This gives a time constant similar in value to Equations [ref] and [ref]. The equations are solved for the differential-mode voltage VDM = VA − VB and for the common-mode voltage VCM = (VA + VB)/2, from which the voltage at node A can be written. In contrast to Equation [ref], the Miller capacitance has a significant effect on the differential-mode voltage, because it increases the time constant of the response and in turn increases the time required for the absolute value of VDM to grow beyond the metastable region.
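The lost derivation can be sketched under the stated symmetry assumptions (a reconstruction consistent with the text, not the thesis's exact equations):

```latex
% Nodal equation at node A with transconductance g_m, output
% resistance R, output capacitance C and Miller capacitance C_M:
%   C\frac{dV_A}{dt} + C_M\frac{d(V_A - V_B)}{dt}
%     = -g_m V_B - \frac{V_A}{R}
% (and symmetrically at node B). Subtracting the two equations
% gives the differential mode; adding them gives the common mode:
\begin{align*}
(C + 2C_M)\,\frac{dV_{DM}}{dt} &= \Big(g_m - \frac{1}{R}\Big) V_{DM}
\;\Longrightarrow\;
V_{DM}(t) = V_{dm0}\, e^{t/\tau},
\qquad
\tau = \frac{C + 2C_M}{g_m - 1/R} \approx \frac{C + 2C_M}{g_m},\\
V_{CM}(t) &= V_{cm0}\, e^{-t/\tau_{cm}},
\qquad
\tau_{cm} = \frac{C}{g_m + 1/R}.
\end{align*}
```

The factor 2C_M in the numerator is what makes the Miller capacitance lengthen the resolution time constant, as the paragraph above states.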

Failure Rate (MTBF)

The probability of a flip-flop being metastable for some time tR or longer is the product of the probability of entering metastability and the probability of not yet having exited it [44]. First, the probability that the flip-flop enters metastability, if input data and clock transitions occur close together within a time window Tw (the metastability window), and under the assumption of uncorrelated average switching frequencies fd and fc of the input data and clock signals, is proportional to Tw·fd·fc. The second part, the probability that the flip-flop has not exited metastability after time tR, is equal to e^(−tR/τ), where τ is the metastability recovery time constant, which indicates the strength and speed with which a flip-flop resolves metastable events. The product of these probabilities defines the rate at which the flip-flop fails, and the inverse of the failure rate is the Mean Time Between Failures (MTBF) [30] [44], as shown in Equation [ref]. The MTBF formula is an important figure of merit for assessing the reliability of flip-flops operating as synchronizers. It has been confirmed in theory and in experiment in [44, 54-59] and improved in [56]. In general, a metastable failure occurs when an input data transition violates the setup and hold times of the flip-flop. The failure rate of a flip-flop is not a guaranteed figure; it is only a good estimate of the reliability of the flip-flop, based on the probability of input violations and the probability of resolving the metastability.
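A minimal numerical sketch of the MTBF expression described above, MTBF = e^(tR/τ)/(Tw·fc·fd); the parameter values are illustrative assumptions, not measurements from this work:

```python
import math

def mtbf(t_r, tau, t_w, f_c, f_d):
    """Mean Time Between Failures of a synchronizing flip-flop:
    MTBF = e^(t_r/tau) / (T_w * f_c * f_d).
    t_r: available resolution time, tau: resolution time constant,
    t_w: metastability window, f_c/f_d: clock and data frequencies."""
    return math.exp(t_r / tau) / (t_w * f_c * f_d)

# Illustrative values: tau = 20 ps, Tw = 30 ps, 1 GHz clock,
# 100 MHz average data rate, ~0.9 ns of available resolution time.
seconds = mtbf(0.9e-9, 20e-12, 30e-12, 1e9, 100e6)
print(f"MTBF = {seconds / (3600 * 24 * 365.25):.2e} years")
```

Note the exponential sensitivity to tR/τ: a small increase in resolution time, or a small decrease in τ, improves the MTBF by orders of magnitude.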

Technology Scaling

Generally, metastability behaviour is a function of process technology and environment: as process technology scales down, the metastability resolution time decreases [30, 60]. This is because the metastability resolution time constant is directly proportional to capacitance (which reduces with scaling) and inversely proportional to the gain-bandwidth product (which increases with scaling). In a similar manner to propagation delay, metastability resolution time increases significantly when the supply voltage is reduced from nominal [60, 61] and when the load capacitance is increased [54], especially at low temperatures [61]. Low supply voltage reduces the drain current and hence the transconductance, and low temperature shifts the threshold voltage up, which in turn reduces the current as well. Increasing the load capacitance, on the other hand, increases the charge that must be supplied by the drain current [62]. Process parameter variations also have a considerable impact on the metastability time response and window [63]. To observe the metastability dependence on technology in an inverter-based latch, the formula [ref] for the metastability resolution time constant τ is derived down to process parameters, assuming the dominant parasitic capacitance is the gate-to-source capacitance, the Miller effect and output resistance are negligible, and identical load inverters equivalent to the latch are present. Equation [ref] shows the dependence of τ on the channel length L of the transistors and on velocity saturation. From Table 2.1 and Equations [ref] and [ref], the impact of scaling on the metastability resolution time constant is derived in [ref] and [ref].

Metastability Impact Mitigation

To reduce the impact of metastability on bistable circuits, different resolving techniques have been presented in the literature; they are described under the following headings.

Latch Sizing and Loading

In normal operating conditions, the strength of metastability in any latch depends primarily on the size of the latch and the total capacitive load it drives. To resolve metastability faster, the latch needs stronger transistors and a small load capacitance. The ratio of the transistors in the inverters also contributes to the behaviour of metastability. From Equation [ref], the metastability resolution time constant τ is an important factor in flip-flop reliability. As shown previously, the value of τ for cross-coupled inverters was modelled and analysed, and shown to be equivalent to the inverse of the gain-bandwidth product of the cross-coupled inverters at the metastable DC level, which is approximated as the total node output capacitance plus the Miller capacitance, all divided by the total logic-gate transconductance. Assume the load inverter size is the latch inverter size times a ratio LL, defined as the load-to-latch width ratio (WLoad/WLatch). Equation [ref] shows approximately how the load-to-latch size ratio directly affects the value of τ: if the load is larger, then τ is higher, and vice versa. When a cross-coupled inverter latch enters metastability, the time needed to resolve it is directly dependent on the value of τ: the larger the value of τ, the longer the resolution time, and the smaller the value of τ, the shorter the resolution time.
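A small sketch of this load dependence, assuming τ ≈ C·(1 + LL)/gm (a simplification of the relation described above; component values are illustrative):

```python
def resolution_tau(c_latch, g_m, ll):
    """Approximate metastability resolution time constant of a
    cross-coupled inverter latch driving a load inverter sized
    ll = W_load / W_latch relative to the latch inverter.
    Miller and output-resistance terms are neglected; all names
    and values here are illustrative."""
    # Total node capacitance grows with the load ratio:
    return c_latch * (1.0 + ll) / g_m

tau_small = resolution_tau(2e-15, 1e-4, 0.5)  # lightly loaded latch
tau_large = resolution_tau(2e-15, 1e-4, 4.0)  # heavily loaded latch
print(tau_small, tau_large)  # larger load -> larger tau -> slower resolution
```

This captures the trade-off stated in the text: a larger load-to-latch ratio increases τ and therefore the time needed to resolve a metastable event.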

Extending Resolution time

In general, the available metastability resolution time is not a design factor in synchronous systems; it is determined by system requirements. For a single flip-flop, the available resolution (or settling) time [44] [64] is mainly the remainder of the clock cycle TC after subtracting the clock-to-output delay tCQ of the flip-flop, the setup time tSU of the following stage and any combinational logic delay in between, which together can be interpreted as the "lost time". This is written in the following equation. When flip-flops are used as synchronizers, the available metastability resolution time becomes a design factor for improving reliability, because as tR increases, the MTBF of the flip-flop increases exponentially. To design for a longer resolution time there are, depending on the design requirements, three approaches based on Equation [ref]: reduce the clock frequency, if the design requirements allow it; reduce the lost time, by replacing the flip-flop with a faster one and connecting its output directly to the next clocked stage; or increase the number of cycles, by directly pipelining two or more flip-flops, without any logic in between, to synchronize and increase the available metastability resolution time. In the last case, Equation [ref] can be rewritten as in [ref], assuming identical flip-flops, where N is the number of flip-flops.
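The resolution-time bookkeeping can be sketched as follows; the N-stage form tR = N·TC − tCQ − tSU is one common approximation for directly pipelined, identical flip-flops, and all values below are illustrative:

```python
import math

def resolution_time(n, t_c, t_cq, t_su, t_logic=0.0):
    """Available resolution time for n directly pipelined flip-flops:
    t_R = n*T_C - t_CQ - t_SU - t_logic (identical flip-flops assumed;
    this is one common approximation, not the thesis's exact equation)."""
    return n * t_c - t_cq - t_su - t_logic

# With a 1 ns clock, adding a second flip-flop adds a full clock cycle
# of resolution time, multiplying the MTBF by e^(T_C / tau).
t_c, tau = 1e-9, 25e-12
t1 = resolution_time(1, t_c, 120e-12, 60e-12)
t2 = resolution_time(2, t_c, 120e-12, 60e-12)
print(math.exp((t2 - t1) / tau))  # MTBF gain of the extra stage
```

The exponential MTBF gain of the extra stage is why two-flip-flop synchronizers are the default choice when latency permits.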

Metastability Filters

Metastability filters have been used to prevent metastability from progressing to the next stage. In general, they are placed just after the latch outputs and transfer metastable levels as logic '1' or logic '0', which is the filtering process. The simplest filter circuit is based on skewed inverters with a low or high switching (threshold) voltage VT, namely high-VT or low-VT inverting filters. Depending on the threshold point, the filter passes metastable levels as logic '0' with a low-VT inverter or as logic '1' with a high-VT inverter. This type of filter is custom designed and not available in FPGAs or standard-cell libraries, but it can be formed in standard-cell designs using a four-input NOR gate with all four inputs connected to the latch or flip-flop output [44]. Another full-custom metastability filter, commonly used with a MUTEX and in this work, is referred to as the exclusion filter or simply 'x-filter'. It is based on two inverters following the NAND-gate output nodes, where each inverter has its PMOS source terminal connected to the input of the other inverter instead of to VDD, in order to sense the voltage difference between the output nodes of the NAND gates. During metastability, the filter's PMOS devices remain inactive because their absolute gate-to-source voltages remain zero; as a result, the output of the filter is held at logic zero until metastability resolves, that is, until the NAND output voltages diverge by more than the absolute threshold voltage of one of the PMOS devices, activating that device, so that eventually one inverter output rises to logic high. An alternative metastability filtering circuit utilizes the hysteresis in the voltage-transfer characteristic of Schmitt-trigger inverters to filter out metastable levels, as presented in [65, 66].
The hysteresis shifts the threshold (inverting) voltage lower, towards zero volts, if the output is already logic '0', and higher, towards VDD, if the output is already logic '1'. In the case of metastability, the Schmitt inverter therefore sees the metastable level as the previous logic value, unless the metastable level crosses its current threshold point. Skewed-inverter filters are more sensitive to noise [44], but they offer a faster transition during normal operation; x-filters, by contrast, can tolerate noise but introduce more delay [44] because of their design requirements. Generally, in synchronous circuits, the use of filters has a number of drawbacks. Filters add propagation delay to deal with metastability and may not resolve it rapidly. They keep the outputs of bistable circuits clear of metastable levels, but a late-resolved metastability may still violate the timing restrictions of the following sequential stage and initiate new metastability events, as discussed previously under the back edge of the clock effect [44]. Moreover, although filters let metastability resolve arbitrarily to either value, zero or one, there is no guarantee that the resolved value is acceptable without further circuitry providing error detection and correction.

Transconductance Booster Feedback

An alternative technique focuses on improving the metastability resolution time rather than filtering the metastable level, especially at low supply voltages and temperatures. This technique utilizes two voltage-controlled current sources, one on each output node of the latch. When metastability occurs, both current sources are enabled to increase the transconductance of the metastable latch and thereby reduce the metastability resolution time constant. The voltage-controlled current sources can be realized with PMOS transistors, as in the robust synchronizer [61] and the boost synchronizer [67, 68], which showed the improvement at low supply voltages and temperatures compared to a simple latch without a booster.

Metastability Error Detection/Correction Feedback

An alternative method of dealing with metastability is to use a metastability detector built from combinational logic, as proposed in the Razor flip-flop [32]. As shown in Fig. 5R, the node voltage of the slave latch drives two skewed gates (one buffer and one inverter) in parallel, whose outputs feed an AND gate. The buffer is two inverters in series with large NMOS transistors, so that it sees metastable levels as a logic-high input value, whereas the inverter has a large PMOS transistor and observes metastable levels as a logic '0' input value. If both produce logic '1', then the AND gate outputs logic '1', indicating that metastability has occurred. Another approach detects metastability with a circuit that senses transition conflicts between the clock and the input data before the data arrives at the input of the master latch. This is known as the transition-detection approach, applied in the RazorII flip-flop [35] and in the Transition-Detection-Time-Borrowing (TDTB) latch [32]. The idea is to find transitions of both signals (clock and data) that coincide in time, early enough before they cause any error to propagate in the system, and then flag an error signal to either drop that bit or correct its value based on a shadow latch or flip-flop. A further technique relies on detecting and correcting metastability events in the master latch of a flip-flop using fast combinational logic to detect the event, then routing the output signal Q within the flip-flop cell to pull down or push up the metastable node, depending on the state of Q.

Characterizing Generic Flip-Flops

Flip-flops are commonly designed as edge-triggered storage elements based on latches. They pass the input data at the rising or falling clock edge and hold the signal stable until the next triggering edge. Any transition at the flip-flop input propagates to its output only if it arrives earlier than the triggering edge of the clock cycle. To pass the data securely to the output, the input signal and the clock must satisfy timing restrictions commonly known as setup and hold times. These are key metrics in identifying the maximum clock frequency of a digital system; therefore, they need to be precisely specified.

Delay Time

In general, the delay between transitions of two signals is measured from the 50% point of the full voltage scale (VDD) of the first signal's transition to the 50% point of the second signal's transition. In a logic gate, the propagation delay is approximated as 0.7×RC of the switched branch of the circuit that drives the output low-to-high or high-to-low, where R is the resistance of the conducting branch, either to ground or to VDD, and C is the total equivalent capacitance at the output node. For a flip-flop, the clock-to-output propagation delay (tCQ) is the time difference measured between the clock triggering edge and the output Q transition (from 50% VDD of the clock edge to 50% VDD of the output transition) when the input data signal is stable near the clock edge, that is, when the data-to-clock time difference (tDC) is wide enough not to violate the setup and hold time restrictions. Besides the flip-flop configuration, the value of this delay is a function of tDC, the clock edge rise/fall time, the supply voltage, the temperature, process parameters and the output load [69]. In general, the delay through a flip-flop experienced by a rising input transition differs from that experienced by a falling input transition. The data-to-output delay (tDQ) is measured from the input data transition to the output, given appropriate clocking of the flip-flop. On its own this delay is not a good metric of flip-flop performance, because of its dependence on the data-to-clock offset time; it is considered good practice to estimate tDQ at the minimum allowed data-to-clock time.
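As a numeric sketch of the 0.7×RC approximation (the 0.7 factor comes from the RC node crossing 50% of VDD at t = ln(2)·RC; values below are illustrative):

```python
import math

def propagation_delay(r, c):
    """First-order gate delay: time for an RC output node to reach
    50% of VDD, t_p = ln(2) * R * C, approximately 0.7 * R * C."""
    return math.log(2) * r * c

# e.g. a 10 kOhm conducting branch driving 5 fF of output capacitance:
print(propagation_delay(10e3, 5e-15))  # roughly 35 ps
```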

Setup Time

Setup time (tSU) is commonly defined as the minimum allowed time between the input data transition and the triggering edge of the clock in order to produce a valid output [10, 31]. It is characterized at the point where a specific increase in the clock-to-output delay of the flip-flop is caused by the data moving closer to the clock, as shown in Figure [ref]. Increasing the offset beyond this point leads to an extended clock-to-output delay, and can even result in flip-flop failure if the data arrives so late that the flip-flop is unable to record the input transition. In general, the setup time for storing logic '1' differs from that for storing logic '0': the setup time for logic '1' is measured at a rising data edge (transition from '0' to '1'), and the setup time for logic '0' at a falling data edge (transition from '1' to '0'). The right-hand curve in Figure [ref] shows the input time values (data-to-clock time, tDC) plotted against the output time values (clock-to-output time, tCQ). It is plotted by recording both values (tDC and tCQ) at different arrival times of the new input value approaching the clock edge, and gives a clear view of the regions of normal operation and failure of the flip-flop. An alternative definition of the setup time is the data-to-clock offset time that results in the minimal data-to-output delay [70]. The setup time value depends on the flip-flop structure, process parameters, supply voltage, temperature and the simulation setup.

Hold Time

Hold time (tH) is defined as the minimum time interval for which the data signal must be kept stable at the input of a flip-flop after the clock triggering edge to maintain a stable and valid output value [10, 31]. If the input signal changes just before the setup time and then changes back to its previous state during or after the clock transition, the clock-to-output delay of the flip-flop will increase, as shown in Figure [ref]. In a similar manner to the setup time, the hold time is evaluated at the input-to-clock offset that causes a 5% increment (for typical applications, or 10% for variability studies) above the minimum clock-to-output delay. The hold time for retaining logic '1' differs from that for retaining logic '0': the hold time for logic '1' is measured at a rising data edge, and the hold time for logic '0' at a falling data edge. Like the setup time, the hold time depends on the flip-flop structure, process parameters, supply voltage, temperature and the simulation setup.

Finding Setup and Hold Time

Typically, finding setup and hold times [10, 31, 70] is a binary-search process, and it requires comprehensive SPICE-level transient simulations of latches or flip-flops using accurate device models. Following the definitions of setup and hold times, the conventional method of finding their values is to set the time interval between the arrival of the input data signal and the clock signal [10, 31], run a simulation to measure the clock-to-output time, and then repeat the process with a narrowed time interval until the targeted increase in clock-to-output time is reached. Accordingly, determining latch or flip-flop setup and hold times is a more computationally challenging process than finding the delays of combinational circuits. An alternative, direct method to estimate the setup and hold times of static flip-flops in one or two SPICE-level transient simulations was presented in [71] and partly in [72]. This method is based on measuring two path delays in the circuit: the first is associated with the transition of the data signal through the data path within the flip-flop to a predefined internal node before the master or slave latch, whereas the second is associated with the transition of the clock signal through the clock path to the same node. The setup time is computed as the difference between two delay values: the first taken from the transition of the input data signal to the input of the gate guarding the slave latch, and the second from the transition of the clock signal to the input of the same gate. Similarly, the hold time is computed as the difference between two delay values: the first taken from the transition of the input data signal to the input of the gate guarding the master latch, and the second from the transition of the external clock input to the input of the same gate.
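The conventional binary-search procedure can be sketched as follows; `simulate_tcq` stands in for a SPICE transient run, and the toy delay model at the bottom is purely illustrative:

```python
import math

def find_setup_time(simulate_tcq, t_nominal, lo, hi, pushout=0.05, tol=1e-15):
    """Binary-search the data-to-clock offset at which the clock-to-output
    delay has grown by `pushout` (e.g. 5%) above its nominal value.
    `simulate_tcq(t_dc)` stands in for a SPICE transient simulation
    returning t_CQ for a given data-to-clock offset t_dc.
    Invariant: `lo` violates the pushout target, `hi` meets it."""
    target = t_nominal * (1.0 + pushout)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if simulate_tcq(mid) <= target:
            hi = mid   # delay still acceptable: data may move closer
        else:
            lo = mid   # delay too long: data must arrive earlier
    return hi

# Toy stand-in for a simulator: t_CQ rises exponentially as t_DC shrinks.
model = lambda t_dc: 100e-12 + 20e-12 * math.exp(-(t_dc - 30e-12) / 5e-12)
print(find_setup_time(model, 100e-12, 0.0, 200e-12))
```

Hold-time extraction follows the same pattern, with the offset measured on the other side of the clock edge.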

Metastability metrics

The significance of metastability behaviour in a latch or flip-flop can be predicted by obtaining the metastability resolution time constant and window for that circuit [44, 54, 55]. Dike and Burton [54] divided the delay of the flip-flop into two main regions: the deterministic region and the true metastability region. In the deterministic region, the delay is determined by the setup time; the flip-flop is not in a metastable state, but close to it, and takes some extra time to resolve its state. In the true metastability region, the time for the flip-flop to enter one of its stable states is non-deterministic. If metastability occurs in a flip-flop in a synchronous system, there is a limited amount of time for it to resolve into either of the two stable states; if the flip-flop does not resolve in time, a failure occurs.

Metastability Resolution Time Constant

The metastability resolution time constant τ can be determined directly using two methods based on transistor-level transient circuit simulations with high accuracy [44, 54, 73]. The first method needs a number of measurements within the setup and hold window. The time constant τ is then the slope of the input values tDC against the output values tCQ within the exponential region of the metastability window in Figure [ref]. However, this only gives an estimate of the true value of τ, because true metastability occurs within tens of femtoseconds or less of time difference between the data edge and the clock edge, so the sweep should be time-stepped at 10 fs or finer [44, 54]. The slope of the exponential region is a semi-log slope. An alternative, direct method [44, 54] finds the true value of τ as follows. First, the latch of interest is shorted by a controlled switch and an offset DC voltage source between the latch nodes (see Figure [ref]), which forces the latch into deep metastability. Then the switch is opened at time t0 to let the latch node voltages diverge, one to VDD and the other to ground. The resulting value of τ is obtained from the slope of the differential voltage VA−B between the nodes during resolution, using the following equation. This method is effective if the latch is symmetric. For an asymmetric latch, the offset voltage must be varied to find the balance point of the latch, before which latch node A, for instance, would resolve to VDD and after which to ground; this requires a binary search.
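The direct method reduces to a two-point semi-log slope of the diverging differential voltage, |VA−B|(t) = V0·e^(t/τ); a minimal sketch with illustrative sample values:

```python
import math

def tau_from_divergence(t1, v1, t2, v2):
    """Resolution time constant from two samples of the diverging
    differential node voltage |V_A - V_B| = V0 * e^(t/tau):
    tau = (t2 - t1) / ln(v2 / v1)."""
    return (t2 - t1) / math.log(v2 / v1)

# Two points read off a transient simulation of the released latch
# (illustrative numbers): 1 mV at 50 ps and 20 mV at 110 ps.
print(tau_from_divergence(50e-12, 1e-3, 110e-12, 20e-3))  # about 20 ps
```

In practice the two samples should both lie inside the linear (small-signal) region of the divergence, before either node approaches a supply rail.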

Metastability Window Tw

There are a number of definitions of Tw in the literature [44, 54, 74-76]. In the context of using flip-flops as synchronizers, Tw can be defined as the region around the clock edge in which metastability may occur when the setup and hold times are violated. Tw and the setup-plus-hold window are therefore related, and the latter is a good approximation of the actual Tw region. Nevertheless, the concept is applied differently in [75], where "metastability window" denotes the asymptotic width of the metastability (failure) region and a separate term is used for the quantity equivalent to τ; the former meaning is used throughout this work. In general, the metastability window is narrower than the setup-to-hold region [74], so using the setup-plus-hold time instead of Tw to compute reliability produces a conservative (shorter) value of MTBF. This conservative value is nevertheless good practice: if an acceptable MTBF is achieved with it, the actual MTBF will certainly be longer.
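The effect of the chosen window on reliability can be seen from the standard synchronizer MTBF model, MTBF = e^(t/τ) / (Tw·fc·fd). All numerical values below are assumed purely for illustration:

```python
import math

def mtbf(t_res, tau, window, f_clk, f_data):
    """Standard synchronizer reliability model:
    MTBF = exp(t_res / tau) / (window * f_clk * f_data)."""
    return math.exp(t_res / tau) / (window * f_clk * f_data)

# Assumed: tau = 20 ps, one clock period (2 ns) allowed to resolve,
# 500 MHz clock, 50 MHz data rate.
with_tw = mtbf(2e-9, 20e-12, 10e-12, 500e6, 50e6)    # actual Tw = 10 ps
with_su_h = mtbf(2e-9, 20e-12, 60e-12, 500e6, 50e6)  # setup + hold = 60 ps
# The setup-plus-hold window gives the shorter, conservative MTBF.
```

Because the window enters the model only linearly while the resolution time enters exponentially, overestimating the window by 6x costs only a factor of 6 in predicted MTBF, which is why the conservative substitution is acceptable practice.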

Power and Energy

The main sources of power dissipation in digital circuits are short-circuit current, switching current and leakage currents. Static power consumption is mainly due to transistor leakage currents and is computed as the total leakage current times VDD. Dynamic power dissipation represents the power consumed during switching activity and depends on the supply voltage VDD, the frequency of operation f, the probability of data switching α and the effective capacitance of the circuit Ceff:

PDynamic = α·f·Ceff·VDD²

This equation highlights the trade-off between speed and power, which matters in both high-performance and low-power applications. To determine the optimum clock frequency and power consumption, the power-delay product (PDP) is used, where PDP = PDynamic·tDQ, the energy spent per switching event in a flip-flop. A better metric for identifying the optimum balance of speed and energy consumption is the energy-delay product, EDP = PDP·tDQ. Both PDP and EDP are quality metrics for digital circuits. Dynamic power can be simulated using transient analysis: currents are measured over a number of triggering cycles with the data signal switching with probability α, then the average total current per cycle is taken and multiplied by VDD. For example, for α = 50% over 4 cycles, the data signal should change exactly twice. When measuring the power required for the clock or data to drive an input terminal, it is important to isolate static power from the input driving power. One way of doing this is to place two identical inverters before each input signal, where one drives the circuit input terminal capacitance and the other drives an open circuit. Both IDD currents through the inverters are then measured, and their difference accounts for the driving current.
The average input terminal power is (IDD-in – IDD0)VDD.
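The relations above can be checked with a small numerical example; all device values here are assumed for illustration:

```python
VDD = 1.0        # supply voltage, V
F = 1e9          # clock frequency, Hz
ALPHA = 0.5      # data switching probability
C_EFF = 10e-15   # effective switched capacitance, F
T_DQ = 80e-12    # D-to-Q delay, s

p_dynamic = ALPHA * F * C_EFF * VDD ** 2  # dynamic power, W
pdp = p_dynamic * T_DQ                    # energy per switching event, J
edp = pdp * T_DQ                          # energy-delay product, J*s
```

Note how EDP weights delay twice: a design that halves tDQ at the same energy per event wins on EDP even if its PDP is unchanged, which is why EDP is the preferred metric when speed and energy must be balanced.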

Variability Analysis

Variations can be modelled as uniform or normal (Gaussian) statistical distributions. Supply voltage and temperature variations can be modelled with a uniform distribution, which assigns equal probability to all samples within specified parameter limits, for example ±10% around nominal VDD. The effects of VDD and ambient temperature variations can be simulated using the parametric analysis available in the Cadence Virtuoso Spectre circuit simulator [77]. A normal distribution is specified by the population mean (µ) and its standard deviation (σ). A variation of one standard deviation (1σ) around the mean covers only 68.3% of the whole population, whereas 2σ and 3σ cover 95.4% and 99.7%, respectively. In general, variations of at least 3σ must be accounted for, and with future technology challenges at least six-sigma variation analysis will need to be considered. Process variations are usually modelled as a normal distribution with three standard deviations around the mean. There are two traditional methods for investigating process-variability tolerance in SPICE-like analogue simulation environments: corner analysis and Monte Carlo analysis.
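The quoted coverage fractions follow directly from the normal cumulative distribution; they can be verified with the error function:

```python
import math

def coverage(k):
    """Fraction of a normal population within +/- k standard deviations:
    P(|X - mu| <= k*sigma) = erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2))

cov1, cov2, cov3 = coverage(1), coverage(2), coverage(3)
# approximately 0.683, 0.954 and 0.997
```

The same expression shows why six-sigma analysis is far more demanding: coverage(6) leaves only about two failures per billion samples outside the band.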

Corner Analysis

The first method is corner analysis, a traditional worst-case model that categorizes all physical and environmental variations into three levels: typical, fast and slow. For process variations, the NMOS and PMOS devices are each characterized as typical, fast or slow, of which five combined NMOS/PMOS models (typical-typical, fast-fast, slow-slow, fast-slow and slow-fast) are commonly provided. For supply voltage variations, the three levels refer to nominal VDD for the typical corner, 0.9×VDD for the slow corner and 1.1×VDD for the fast corner. For temperature variations, the three levels refer to room temperature (27 °C) for the typical corner, cold temperatures (0 or −40 °C) for the slow corner and hot temperatures (70 or 125 °C) for the fast corner. The combination of all these PVT levels (three on each of the four axes: NMOS process, PMOS process, voltage and temperature) creates a total of 81 corners; however, not all of them are needed, as each corner tests a particular condition. For instance, the all-slow corner (slow process, 0.9×VDD and the slow-corner temperature) exercises the worst-case delay. Corner analysis is a straightforward and computationally efficient tool; however, as variations become more significant it loses accuracy, since it does not represent the full sample distribution or the yield [14].
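The corner count can be reproduced by enumerating three levels on each of the four axes; the level labels below are assumptions for illustration:

```python
from itertools import product

levels = ("slow", "typical", "fast")
# One entry per (NMOS, PMOS, VDD, temperature) combination.
corners = list(product(levels, repeat=4))

# Only a handful are usually simulated, e.g. the all-slow corner:
worst_case_delay = ("slow", "slow", "slow", "slow")
```

Enumerating the full set makes the trade-off concrete: 81 deterministic simulations bound the extremes cheaply, but say nothing about how the population is distributed between them.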

Monte Carlo Analysis

An alternative approach is Monte Carlo statistical analysis. It provides an accurate statistical representation of performance, but its computational cost grows with the circuit size and the number of samples required. It randomizes process parameters according to their offsets in the technology model files, within a specified number of standard deviations. The Spectre simulator [77] provides both tools. In this work, the Monte Carlo method is used when simulating small cells and circuits, while the corner method accounts for worst-case scenarios. Before using either of them, the right technology files must be linked to the analysis. For Monte Carlo, the sigma value, the population sample size and the type of variation must be defined [77].
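A minimal sketch of the idea, assuming a single normally distributed process parameter with invented values (real Monte Carlo analysis perturbs many correlated device parameters per the foundry model files):

```python
import random

random.seed(1)            # fixed seed for a reproducible run
MU, SIGMA = 0.4, 0.004    # assumed threshold voltage: mean and sigma, V
S = 10_000                # population sample size

samples = [random.gauss(MU, SIGMA) for _ in range(S)]
# Count samples falling outside the three-sigma band.
outliers = sum(1 for v in samples if abs(v - MU) > 3 * SIGMA)
```

For a true normal distribution roughly 0.3% of samples land outside 3σ, so with 10 000 samples a few dozen outliers are expected; this is the tail behaviour that corner analysis cannot quantify.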

Cumulative Simulations

Simulating process-variability effects on timing and power is straightforward and similar to the methods discussed in the previous subsections, except when simulating setup/hold times or metastability windows. Because of the discontinuous failure region, a large proportion of samples fail in the Monte Carlo analysis, which makes it difficult to collect enough data about the exponential region to extract those parameters. Instead of relying on the normal distribution curve, it is easier to produce a cumulative distribution (CDF) plot from a series of Monte Carlo simulations at different values of tDC. This is monitored by measuring the Q voltage after the clock-to-Q time, to check for a stable output. For a number S of Monte Carlo simulations, N simulations are accepted and S − N rejected; two bins are sufficient to record this information. The simulation is then repeated with a different tDC, recording the value of N each time. Finally, the predefined D-to-clock time (that is, the setup or hold time) is plotted against N, which gives a CDF of a normal distribution, where N = 50% is taken as the mean and the negative and positive standard deviations are read at N = 15% and N = 85%. Using this method, variation of the setup/hold and window times can be measured and presented statistically. During mismatch-variation analysis, the value of τ should not be simulated using the switch method; it is better to produce an input time difference (tDC or tin) against output delay (tCQ or tout) curve, and then measure τ from its semi-log slope together with the timing metrics.
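The read-off step at the end can be sketched with hypothetical pass fractions; the tDC values and fractions below are invented for illustration:

```python
# (t_DC in ps, fraction N/S of Monte Carlo runs with a stable output)
cdf = [
    (10, 0.02), (15, 0.15), (20, 0.50), (25, 0.85), (30, 0.98),
]

def crossing(cdf, level):
    """Linearly interpolate the t_DC at which the CDF crosses `level`."""
    for (x0, y0), (x1, y1) in zip(cdf, cdf[1:]):
        if y0 <= level <= y1:
            return x0 + (level - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("level outside sampled range")

mean_setup = crossing(cdf, 0.50)                          # mean, ps
sigma = (crossing(cdf, 0.85) - crossing(cdf, 0.15)) / 2   # +/- 1 sigma, ps
```

Two bins per tDC point (accept/reject) are indeed all the bookkeeping this needs; the statistical structure is recovered entirely from how N/S varies across the sweep.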
