High Throughput and Low Power Reed Solomon Decoder for Ultra Wide Band

A. Kumar, S. Sawitzki


Abstract

Reed Solomon (RS) codes have been widely used in a variety of communication systems. Continual demand for ever higher data rates makes it necessary to devise very high-speed implementations of RS decoders. In this paper, a uniform comparison was drawn for various algorithms and architectures proposed in the literature, which helped in selecting the appropriate architecture for the intended application. The dual-line architecture of the modified Berlekamp Massey algorithm was chosen for the final design. Using PcCMOS12corelib, the design achieves a throughput of 1.6 Gbps. The design dissipates only 17 mW of power in the worst case, including memory, when operating at a 1.0 Gbps data rate.

1 Motivation

Reed Solomon (RS) codes have been widely used in a variety of communication systems such as space communication links, digital subscriber loops and wireless systems, as well as in networking communications and magnetic and other data storage systems. Continual demand for ever higher data rates and storage capacity makes it necessary to devise very high-speed implementations of RS decoders. Newer and faster implementations of the decoder are continually being developed and deployed.

A number of algorithms are available, and this often makes it difficult to determine the best choice because of the number of variables and trade-offs involved. Therefore, a thorough survey of the available decoders is needed before a good choice can be made for the application.

For the IEEE 802.15-03 standard proposal (commonly known as UWB) in particular, very high transmission data rates are needed. According to the current standard, the data rate for UWB will be as high as 480 Mbps. Since the standard is also meant for portable devices, power consumption is of prime concern, and at the same time the silicon area should be kept as low as possible. As such, a low-power and high-throughput codec is needed for the UWB standard. Reed Solomon is seen as a promising codec for such a standard.

2 Introduction to Reed Solomon

Reed Solomon codes are perhaps the most commonly used codes in all forms of transmission and data storage for forward error correction (FEC). The basic idea of FEC is to systematically add redundancy at the end of the messages, so that messages can be recovered correctly despite errors in the received sequences. This eliminates the need for retransmission of messages over a noisy channel. RS codes are a subset of Bose-Chaudhuri-Hocquenghem (BCH) codes and are linear block codes. [1] is one of the best references for RS codes.

An (n, k) code implies that the encoder takes in k symbols and adds n - k parity symbols to make an n-symbol code word. Each symbol is at least m bits wide, where 2^m - 1 >= n. Conversely, the longest code word length for a given symbol size m is 2^m - 1. For example, an RS(255, 239) code takes in 239 symbols and adds 16 parity symbols to make 255 symbols overall of 8 bits each. Figure 1 shows an example of a systematic RS code word. It is called a systematic code word because the input symbols are left unchanged and only the parity symbols are appended to them.

Figure 1: A typical RS code word

Reed Solomon codes are best suited for burst errors. If the code is not meant for erasures, it can correct errors in up to t symbols, where t = (n - k)/2. A symbol is in error if at least one of its bits is wrong. Thus, RS(255, 239) can correct errors in up to 8 symbols, or 50 continuous bit errors. It is also interesting to see that the hardware required is proportional to the error correction capability of the system and not to the actual code word length as such.
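As a quick worked check of these relations for the code used later in the paper:

```latex
n = 2^m - 1 = 2^8 - 1 = 255,\qquad k = 239,\qquad
n - k = 16\ \text{parity symbols},\qquad t = \tfrac{n-k}{2} = 8
```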

When a code word is received at the receiver, it is often not the same as the one transmitted, since noise in the channel introduces errors into the system. If r is the received code word, we have

r = c + e    (1)

where c is the original code word and e is the error introduced in the system. The aim of the decoder is to find the vector e and then subtract it from r to recover the original code word. It should be added that there are two aspects of decoding: error detection and error correction. As mentioned before, the errors can only be corrected if there are at most t of them. However, the Reed Solomon algorithm still allows one to detect whether there are more than t errors. In such cases, the code word is declared uncorrectable.

The basic decoder structure is shown in Figure 2. A detailed explanation of Reed Solomon decoders can be found in [1] and [2]. The decoder essentially consists of four modules. The first module computes the syndrome polynomial from the received sequence. This is used to solve a key equation in the second block, which generates two polynomials that determine the location and value of the errors in the received code word. The next block, the Chien search, uses the error locator polynomial obtained from the second block to compute the error locations, while the fourth block employs the Forney algorithm to determine the error values. The correction block merely adds the values obtained from the output of the Forney block and the FIFO block. Please note that in Galois arithmetic, addition and subtraction are equivalent.
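To make the first decoding stage concrete, the following is a minimal software sketch of syndrome computation for an RS(255, 239) code over GF(2^8), evaluating S_i = r(alpha^i) by Horner's rule. It is not the paper's hardware architecture; the primitive polynomial 0x11d and the choice of alpha^1 as the first root of the generator polynomial are common conventions assumed here rather than taken from the paper.

```c
#include <stdint.h>
#include <stdio.h>

#define NN      255          /* code word length n     */
#define KK      239          /* message length k       */
#define NROOTS  (NN - KK)    /* 2t = 16 parity symbols */

/* GF(2^8) multiply by shift-and-reduce, assuming the common
 * primitive polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11d).    */
static uint8_t gf_mul(uint8_t a, uint8_t b)
{
    uint8_t p = 0;
    while (b) {
        if (b & 1) p ^= a;
        b >>= 1;
        a = (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x1d : 0));
    }
    return p;
}

/* Syndromes S[i] = r(alpha^(i+1)), i = 0..2t-1, evaluated by Horner's
 * rule.  A non-zero syndrome vector means the received word has errors. */
static void syndromes(const uint8_t r[NN], uint8_t S[NROOTS])
{
    uint8_t root = 2;                      /* alpha^1 in this representation */
    for (int i = 0; i < NROOTS; i++) {
        uint8_t s = 0;
        for (int j = 0; j < NN; j++)       /* s = (...(r[0]*a + r[1])*a + ...) */
            s = gf_mul(s, root) ^ r[j];
        S[i] = s;
        root = gf_mul(root, 2);            /* next root alpha^(i+2) */
    }
}

int main(void)
{
    uint8_t r[NN] = {0};                   /* the all-zero word is a valid code word */
    uint8_t S[NROOTS];
    r[10] ^= 0x5a;                         /* inject a single symbol error           */
    syndromes(r, S);
    for (int i = 0; i < NROOTS; i++)
        printf("S[%d] = 0x%02x\n", i, S[i]);
    return 0;
}
```

The hardware syndrome block of Table 1 performs the same accumulation with 2t multipliers working in parallel, one per root, consuming one received symbol per clock cycle, which is why its latency is n cycles.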

3 Channel Model

Before we proceed to the actual decoder implementation, it is important to look at the channel model itself. Since UWB (Ultra Wide Band) is not yet very well explored, it is important to analyse how the channel behaves at the frequency and the data rate under consideration.

Figure 2: Decoder flow

One of the most common models used for modelling transmission over land mobile channels is the Gilbert-Elliott (GE) model. In this model a channel can be either in a good state or a bad state, depending on the signal-to-noise ratio (SNR) at the receiver. The probability of error is different for the different states. In [3], Ahlin presented a way to match the parameters of the GE model to the land mobile channel, an approach that was generalized in [4] to a Markov model with more than two states.

Figure 3: The Gilbert-Elliott Channel Model.

Figure 3 shows the GE channel model. Two states are shown, represented by G and B, indicating the good and the bad state respectively. Further, the transition probability from the good state to the bad state is denoted by b, and that from the bad to the good state by g. The probability of error in states G and B is denoted by P_G and P_B respectively. A detailed analysis can be found in [5] and [6].
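The following is a minimal sketch of how such a two-state channel can be simulated in software. The state and probability names follow the notation above; the numerical parameters in main() are placeholders, not the values derived for the UWB channel in the paper.

```c
#include <stdio.h>
#include <stdlib.h>

/* Gilbert-Elliott channel: two states, each with its own bit error
 * probability; a state transition may occur at every bit interval.  */
typedef struct {
    double b;    /* P(good -> bad)                     */
    double g;    /* P(bad  -> good)                    */
    double pg;   /* bit error probability, good state  */
    double pb;   /* bit error probability, bad state   */
} ge_channel;

static double urand(void) { return (double)rand() / ((double)RAND_MAX + 1.0); }

/* Transmit nbits through the channel and return the number of bit errors.
 * *state is 0 for good, 1 for bad, and is updated so that consecutive
 * code words see a continuous channel.                                   */
static long ge_transmit(const ge_channel *ch, long nbits, int *state)
{
    long errors = 0;
    for (long i = 0; i < nbits; i++) {
        double perr = (*state == 0) ? ch->pg : ch->pb;
        if (urand() < perr) errors++;
        /* state transition for the next bit */
        if (*state == 0) { if (urand() < ch->b) *state = 1; }
        else             { if (urand() < ch->g) *state = 0; }
    }
    return errors;
}

int main(void)
{
    /* Placeholder parameters, not the ones matched to the UWB channel. */
    ge_channel ch = { .b = 1e-6, .g = 1e-4, .pg = 1e-5, .pb = 1e-2 };
    int state = 0;
    long nbits = 255 * 8;                   /* one RS(255,239) code word */
    printf("bit errors in one code word: %ld\n", ge_transmit(&ch, nbits, &state));
    return 0;
}
```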

3.1 Simulation

The following parameters were set for the simulation of the Ultra Wide Band channel:

carrier frequency = 4.0 GHz
information rate = 480 Mbps

Two sets of simulations were run for different threshold readings. The threshold here signifies the SNR level at which the channel changes state. The first set had the threshold set 5 dB below the average SNR, the other 10 dB below the average. Because of the very high data bit rate involved, the transition probabilities are very small. Channel transitions therefore become very rare events, and the simulations determined the error probabilities for code words beginning in a certain state. These were then weighted by the steady-state probability of the corresponding state and added together to obtain the overall error probability. Two measures, the bit error rate and the symbol error rate, were computed and plotted. The simulation was run for 10,000 code words to get a good estimate for each state. The Mathematica software was used to solve the complex mathematical equations and obtain the channel model parameters for the physical quantities under consideration.

Figure 4: The symbol error rate and bit error rates for different thresholds.
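For reference, the weighting described above uses the steady-state probabilities of the two-state chain of Figure 3. With the notation introduced there (transition probabilities b and g, per-state error probabilities P_G and P_B), the overall error rate is obtained as

```latex
\pi_G = \frac{g}{b+g},\qquad \pi_B = \frac{b}{b+g},\qquad
\bar{P}_e = \pi_G P_G + \pi_B P_B
```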

3.2 Simulation Results

As can be seen from Figure 4, the error probabilities decrease with increasing SNR, as expected. The figure shows the observed symbol and bit error probabilities. As expected, the error rates follow a linear relationship with increasing SNR on the logarithmic scale. We notice that around 20 dB average SNR, for both thresholds, the symbol error rate is about 0.02, which corresponds to an average of about 5 symbol errors in a code word of 255 symbols (0.02 x 255 ≈ 5). From these results, an error correction capability of 8 is seen as a good choice, since when the SNR is above 20 dB the likelihood of more than 8 errors in a code word of 255 symbols is very low.

4 Architecture Design Options

Having decided on the code word, an investigation was carried out to determine the appropriate algorithm and architecture. Figure 5 shows the various architectures available. Table 1 shows the hardware requirements of the computational elements used in the various architectures. Estimates have been made from the figures drawn in the papers when actual counts could not be obtained for an architecture. It should be noted that this covers only the computational elements; additional hardware will be needed for control overhead. The total latency of the various blocks determines the size of the FIFO.

Figure 5: Design Space Exploration

Architecture                Block                          Multipliers   Latches   Critical Path Delay
Syndrome Computation [7]                                   2t            1         2
                            Total                          2t            4t        Mul + Add
                                                           x             1
                                                           2xt           2t        n/x
                            Divider Block                  2t            1         2
                            Multiply Block                 t             1         3
                            Total (estimates)              3t            7t
                            Actual [10]                    3t+1          14t+6     ROM + 2 Mul + Add + 2 Mux
                            Degree Computation Block       2t            0         7
                            Polynomial Arithmetic Block    2t            4         19
                            Total (estimates)              8t            52t
                            Actual [10]                    8t            78t+4     Mul + Add + Mux
Modified Berlekamp Massey                                  1             1         2t(t+1)
                                                           1             4         2t(2t+2)
                                                           2             2         2t(t+1)
                                                           t             t         2t
                                                           2t            2t        3t+1
                                                           3t+1          3t+1      2t
                                                           2t            2t+2      4

Table 1: Hardware requirements of the computational elements used in the various architectures

4.1 Design Decisions

In order to choose a good architecture for the application, various factors have to be taken into account.

Gate count: Determines the silicon area needed for the design. It is a one-time production cost, but can be critical if it is too high.

Latency: Latency is defined as the delay between the received code word and the corresponding decoded code word. The lower the latency, the smaller the FIFO buffer size required; it therefore also determines the silicon area to a large extent.

Critical path delay: Determines the minimum clock period, i.e. the maximum frequency at which the system can be operated.

Table 1 shows a summary of all the above-mentioned parameters. For our intended UWB application, speed is of prime concern, as the decoder has to support data rates as high as 480 Mbps, and perhaps even 1 Gbps in the near future. At the same time, power has to be kept low, as the decoder is to be used in portable devices as well. This implies that the amount of active hardware at any time should be kept low. Also, the overall latency and the gate count of the computational elements should be low, since they determine the total silicon area of the design.

4.1.1 Key Equation Solver

The reformulated inversion-less and the dual-line implementations of the modified Berlekamp Massey algorithm have the smallest critical path delay among all the alternatives for the key equation solver. When comparing the inversion-less and dual-line implementations, the dual-line implementation is a good compromise between latency and the number of computational elements needed. Its latency is one of the lowest, and it has the least critical path delay of all the architectures summarized. Thus, the dual-line implementation of the BM algorithm was chosen for the key equation solver. Another benefit of this architecture is that the design is very regular and hence easy to implement.
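For reference, the sketch below shows the generic inversion-less Berlekamp-Massey iteration that such key equation solvers realise in hardware. It is a plain software model under the same GF(2^8) assumptions as the earlier sketch (primitive polynomial 0x11d, roots starting at alpha^1); it does not reproduce the dual-line, pipelined datapath chosen here.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NN      255            /* code word length n            */
#define TCAP    8              /* error correction capability t */
#define NROOTS  (2 * TCAP)     /* number of syndromes 2t        */

/* GF(2^8) multiply, assuming the common primitive polynomial 0x11d. */
static uint8_t gf_mul(uint8_t a, uint8_t b)
{
    uint8_t p = 0;
    while (b) {
        if (b & 1) p ^= a;
        b >>= 1;
        a = (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x1d : 0));
    }
    return p;
}

/* Syndromes S[i] = r(alpha^(i+1)) by Horner's rule (roots alpha^1..alpha^2t). */
static void syndromes(const uint8_t r[NN], uint8_t S[NROOTS])
{
    uint8_t root = 2;                       /* alpha^1 */
    for (int i = 0; i < NROOTS; i++) {
        uint8_t s = 0;
        for (int j = 0; j < NN; j++)
            s = gf_mul(s, root) ^ r[j];
        S[i] = s;
        root = gf_mul(root, 2);
    }
}

/* Inversion-less Berlekamp-Massey: from the 2t syndromes compute a scalar
 * multiple of the error locator polynomial lambda(x) (same roots, which is
 * all the Chien search needs).  Returns the estimated number of errors L. */
static int berlekamp_massey(const uint8_t S[NROOTS], uint8_t lambda[TCAP + 1])
{
    uint8_t B[TCAP + 1] = { 1 };            /* correction polynomial B(x)    */
    uint8_t gamma = 1;                      /* previous non-zero discrepancy */
    int L = 0;

    memset(lambda, 0, TCAP + 1);
    lambda[0] = 1;

    for (int r = 0; r < NROOTS; r++) {
        uint8_t delta = 0;                  /* discrepancy of iteration r */
        for (int i = 0; i <= L && i <= TCAP; i++)
            delta ^= gf_mul(lambda[i], S[r - i]);

        /* T(x) = gamma*lambda(x) + delta*x*B(x)  (minus == plus in GF(2^m)) */
        uint8_t tmp[TCAP + 1];
        for (int i = 0; i <= TCAP; i++) {
            tmp[i] = gf_mul(gamma, lambda[i]);
            if (i > 0)
                tmp[i] ^= gf_mul(delta, B[i - 1]);
        }

        if (delta != 0 && 2 * L <= r) {     /* length change: save old lambda */
            memcpy(B, lambda, TCAP + 1);
            gamma = delta;
            L = r + 1 - L;
        } else {                            /* B(x) <- x * B(x) */
            memmove(B + 1, B, TCAP);
            B[0] = 0;
        }
        memcpy(lambda, tmp, TCAP + 1);
    }
    return L;
}

int main(void)
{
    uint8_t r[NN] = {0}, S[NROOTS], lambda[TCAP + 1];
    r[5]  ^= 0x21;                          /* inject two symbol errors into */
    r[77] ^= 0x9c;                          /* the (valid) all-zero word     */
    syndromes(r, S);
    printf("errors located: %d\n", berlekamp_massey(S, lambda));  /* expect 2 */
    return 0;
}
```

A hardware key equation solver evaluates the discrepancy and the two polynomial updates with parallel multiply-accumulate units; the sequential loop above is only meant to make the data flow explicit.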

4.1.2 RS Code

As we can see from Table 1, the hardware requirement for the entire block is a function of t, the error correction capability, while the latency is a function of both t and n. Thus, while we want a code with high error correction capability, we cannot have a very high value of t, as the hardware needed is proportional to it. The value of n determines the bit-width of the symbols and therefore the hardware needed, but only logarithmically. However, one would want to have a value of n close to 2^m - 1 to derive maximum benefit from the hardware, so n + 1 is often chosen to be a power of 2 in order to maximise the hardware utilised in the design. Taking into account the results of the channel model simulation, RS(255, 239) is chosen, since it has an error correction capability of 8.

4.2 Highlights

Table 2 shows the various parameters for the chosen dual-line architecture with n = 255 and t = 8. The overall critical path delay is hence Mul + Add.

Architecture             Adders   Muxes    Latency
Syndrome Computation     2t       2t       n
Dual-line                2t       2t       3t+1
Chien/Forney             2t       2t+2     4
Total                    6t       6t+2     3t+n+5
For parameters above     48       50       284

Table 2: Summary of hardware utilization for the dual-line architecture
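To connect these figures with the throughput and latency numbers quoted later, a worked check, assuming one 8-bit symbol is processed per clock cycle:

```latex
\text{throughput} = f_{\mathrm{clk}} \times 8\ \text{bit/symbol}:
\quad 200\ \text{MHz} \times 8 = 1.6\ \text{Gbps},
\quad 125\ \text{MHz} \times 8 = 1.0\ \text{Gbps}

\text{latency} = 3t + n + 5 = 3 \cdot 8 + 255 + 5 = 284\ \text{cycles}
```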

5 Design Flow

The first step was to develop a C model of the decoder. The gcc compiler was used to compile the code and check that it worked correctly. The output of each intermediate stage was compared with the output expected from the algorithm with the aid of an example.

Once the algorithm was fully developed and tested in C, VHDL code was developed. The VHDL code was structured so that it could be easily synthesized. A wrapper was written around it in order to test it. This VHDL code was compiled and tested using Cadence tools. Ncsim was used to simulate the system and generate the output stream for the same input tests as were used for testing the C code. The output streams from VHDL and C were then compared.
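As an illustration of this golden-model check, the sketch below compares two output streams symbol by symbol. The file names and the one-hexadecimal-value-per-line format are assumptions made for the example, not the format of the original test bench.

```c
#include <stdio.h>

/* Compare the C-model output with the VHDL simulation output,
 * assuming both are dumped as one hexadecimal symbol per line.  */
int main(void)
{
    FILE *ref = fopen("c_model_out.txt", "r");     /* hypothetical file names */
    FILE *dut = fopen("vhdl_sim_out.txt", "r");
    if (!ref || !dut) { fprintf(stderr, "missing input file\n"); return 1; }

    unsigned a, b;
    long line = 0, mismatches = 0;
    while (fscanf(ref, "%x", &a) == 1 && fscanf(dut, "%x", &b) == 1) {
        line++;
        if (a != b) {
            mismatches++;
            printf("mismatch at symbol %ld: expected %02x, got %02x\n", line, a, b);
        }
    }
    printf("%ld symbols compared, %ld mismatches\n", line, mismatches);
    fclose(ref);
    fclose(dut);
    return mismatches != 0;
}
```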

When the outputs were found to match for various input test cases, synthesis experiments were started. Ambit from Cadence was used to analyse the hardware usage and the frequency of operation under various optimisation settings.

The design flow needed for verification of the synthesized design and for power estimation is explained in Figure 6. As shown in the figure, the core VHDL modules were optimised and synthesized using Ambit. The synthesized model was written out as a Verilog netlist, also using Ambit. Once the netlist was obtained, it was compiled with ncvlog into the work library together with the technology library. The library used was for the same technology as the one used for synthesis. As can be seen, the wrapper modules were written in VHDL, while the compiled core came from Verilog. Thus, to allow interaction between the two, the top-level interface of the work library was extracted into a VHDL file and then compiled into the work library. This was done using ncshell and ncvhdl respectively. With this done, the wrapper modules were compiled into the work library.

From this point onwards, two approaches were used. Ncelab and ncsim were used purely for simulating the synthesized design, while dncelab and dncsim were used to obtain power estimates; these are essentially the same tools, but include the DIESEL routines for estimating the power dissipated in the design. Diesel is an internal tool developed within Philips which estimates the power for the simulated design, and hence the accuracy of the results depends on the input provided.

6 Results

This section covers the results of the various synthesis experiments conducted. Resource utilization, timing analysis and power consumption were used as benchmarking parameters.

Figure 6: Design flow for design verification and estimation of power

6.1 Area Analysis

Ambit was run with the PcCMOS12corelib and PcCMOS18corelib libraries. The silicon area required was analysed for various timing constraints. A comparison of the decoder area is shown in Table 3. This table shows the area requirement when the constraint was set to 5 ns, which can support a 200 MHz clock frequency, i.e. 1.6 Gbps. The total number of design cells used, including the memory, was 12,768 for PcCMOS18corelib and 12,613 for PcCMOS12corelib.

Module      CMOS12      CMOS18
Chien
FIFO
Forney
Gen eep
Gen

Table 3: Resource utilization for the decoder in CMOS12 and CMOS18

6.2 Power Analysis

The power estimates provided in this section are for the design operating at 125 MHz, which translates to a data rate of 1 Gbps.

6.2.1 Variation With Number of Errors

Figure 7 shows the variation of power with the number of errors found in the code word for PcCMOS12corelib. As can be seen from the graph obtained, the power dissipated by the FIFO and the syndrome computation block is independent of the number of errors, as expected. For the block that computes the ELP (Error Locator Polynomial) and the EEP (Error Evaluator Polynomial), the power dissipated clearly increases linearly with the number of errors. The Chien search block also shows a linear increase in the power dissipated.

Figure 7: Graph showing variation of power dissipated with number of errors for different modules.

The behaviour of the Forney evaluator is a bit different from that of the other modules. We see that the power dissipated for a code word with an even number of errors is not significantly larger than for one with the previous (odd) number of errors. The reason lies in the fact that the degree of the EEP for a code word with one error is often the same as for one with two errors, and so on. However, as a general rule, there is still an increase in the power dissipation, because some computation is done for each error found.

6.2.2 Distribution of Power in Different Modules

Figure 8 shows the distribution of power when the received code word contains the maximum number of correctable errors, while Figure 9 shows the distribution when the code word is received intact. As can be seen, in the case of no errors the bulk of the power is consumed in computing the syndromes, apart from the memory. In the event of the maximum number of errors being detected, the Forney block consumes the most power.

7 Benchmarking

Please note that the RS(255, 239) code has been used for benchmarking all the designs. The design using the modified Euclidean algorithm is very hardware intensive: the design proposed in [7] uses roughly 115K gates, excluding memory, operating at 6 Gbps. The proposed design uses only 12K cells, including memory, in both CMOS12 and CMOS18 technology. The results are better even when they are normalised for throughput and technology. The latency of the design is only 284 cycles, compared to 355 cycles in [7].

In terms of power, a low-power design was proposed by Chang in [9]. In that design, 62 mW of power is consumed in the best case, including memory, and 100 mW in the worst case. In our design, only 17 mW of power is consumed in the worst case. The chip areas of the design in [9] and of the proposed design were also compared in their respective technologies.

8 Conclusions

A uniform comparison was drawn between the various algorithms that have been proposed in the literature. This helped in selecting the appropriate architecture for the intended application. The modified Berlekamp Massey algorithm was chosen for the VHDL implementation. The dual-line architecture was used, which is as fast as the serial approach while having a latency as low as that of a parallel approach.

The decoder implemented is capable of running at 200 MHz in an ASIC implementation, which translates to 1.6 Gbps, and requires only about 12K design cells in CMOS12 technology. The system has a latency of only 284 cycles for the RS(255, 239) code. The power dissipated in the worst case is 17 mW, including the memory block, when operating at a 1 Gbps data rate.

References

[1] S. B. Wicker and V. K. Bhargava, Reed Solomon Codes and Their Applications, Piscataway, NJ: IEEE Press, 1994.

[2] R. E. Blahut, Theory and Practice of Error Control Codes, Addison-Wesley, 1983.

[3] L. Ahlin, "Coding methods for the mobile radio channel," Nordic Seminar on Digital Land Mobile Communications, 1985.

[4] H. S. Wang and N. Moayeri, "Finite-state Markov channel - a useful model for radio communication channels," IEEE Transactions on Vehicular Technology, 1995.

[5] L. Wilhelmsson and L. B. Milstein, "On the effect of imperfect interleaving for the Gilbert-Elliott channel," IEEE Transactions on Communications, 1999.

[6] G. Sharma, A. Dholakia and A. Hassan, "Simulation of error trapping decoders on a fading channel," IEEE Transactions on Vehicular Technology, 1996.

[7] H. Lee, "High-speed VLSI architecture for parallel Reed-Solomon decoder," IEEE Transactions on VLSI Systems, 2003.

[8] H. Lee, M. L. Yu and L. Song, "VLSI design of Reed-Solomon decoder architectures," IEEE International Symposium on Circuits and Systems, 2000.

[9] H. C. Chang, C. C. Lin and C. Y. Lee, "A low-power Reed-Solomon decoder for STM-16 optical communications," IEEE Asia-Pacific Conference on ASIC, 2002.

[10] H. Lee, "An area-efficient Euclidean algorithm block for Reed-Solomon decoder," IEEE Computer Society Annual Symposium on VLSI, 2003.

[11] H. C. Chang and C. Y. Lee, "An area-efficient architecture for Reed-Solomon decoder using the inversionless decomposed Euclidean algorithm," IEEE International Symposium on Circuits and Systems, 2001.

[12] H. C. Chang and C. B. Shung, "A (208,192;8) Reed-Solomon decoder for DVD application," IEEE International Conference on Communications, 1998.

[13] H. J. Kang and I. C. Park, "A high-speed and low-latency Reed-Solomon decoder based on a dual-line structure," IEEE International Conference on Acoustics, Speech, and Signal Processing, 2002.

[14] D. V. Sarwate and N. R. Shanbhag, "High-speed architectures for Reed-Solomon decoders," IEEE Transactions on VLSI Systems, 2001.
