

Accurate text localization in images based on SVM output scores

Cheolkon Jung a,*, Qifeng Liu b, Joongkyu Kim a,*

a School of Information and Communication Engineering, Sungkyunkwan University, 300 Cheoncheon-dong, Suwon, Kyunggido 440-746, South Korea

b Samsung Advanced Institute of Technology, Yongin, Kyunggido 446-712, Republic of Korea

Article info

Article history:
Received 15 December 2006
Received in revised form 28 September 2007
Accepted 25 November 2008

Keywords:
Text localization
SVM output score
Text line refinement
Classifier fusion

Abstract

In this paper, we propose a new approach for accurate text localization in images based on SVM (support vector machine) output scores. In general, SVM output scores for the verification of text candidates provide a measure of the closeness to the text. Up to the present, most researchers have used the score only to verify whether a text candidate region is text or not. However, we use the output score for refining the initially localized text lines and for selecting the best localization result from the different pyramid levels. By means of the proposed approach, we can obtain more accurate text localization results. Our method has three modules: (1) text candidate detection based on edge-CCA (connected component analysis), (2) text candidate verification based on the classifier fusion of N-gray (normalized gray intensity) and CGV (constant gradient variance), and (3) text line refinement based on the SVM output score, color distribution and prior geometric knowledge. By means of experiments on a large news database, we demonstrate that our method achieves impressive performance with respect to accuracy, robustness and efficiency.

© 2008 Elsevier B.V. All rights reserved.

1. Introduction

Text in images and videos always contains useful information, which can help a machine to understand their content. Therefore, text is very important for the automatic annotation, indexing and parsing of images and videos [1,2,6,12,15]. In general, the processing of text in images is comprised of three steps: text localization, text segmentation and text recognition. Text localization is used to localize the text lines using rectangular boxes, text segmentation is performed to compute the foreground of the text, and text recognition is conducted to convert the segmented text image into plain text [3]. In this paper, we focus on text localization in images. Text localization is a difficult task, because it suffers from the following uncertainties: (1) background variation; (2) variation in text size, font, color, contrast, texture and direction; (3) dynamic evolution of text in video; (4) ambiguity between the text and background structure. In addition, text localization is generally time-consuming, and is a challenging task to implement in a real system.

Up to the present, many significant achievements have been made by researchers in the field of text localization. In the early stages of the research in this area [4–9], the methods were comparatively simple, because the texts were detected and localized based on much heuristic information. Although these methods were very fast, a large number of false alarms inevitably occurred. Recently, many researchers have focused their attention on the application of pattern classification to text localization [10–15]. The machine learning methods used for pattern classification are AdaBoost [10], SVM [11,12,15], and NN (neural network) [13]. Table 1 summarizes the conventional methods of text localization mentioned above. Even though much attention has been paid to these conventional methods, some problems remain. For example, what is the best feature for text localization? How can we localize text accurately in various images with various text sizes? These problems provide the motivation for our work in this paper.

Based on the above considerations, we propose a new method of accurate text localization using the SVM output score. We use a cascade scheme for text localization, as shown in Fig. 1.

Unlike some other methods [12,15], the proposed method incorporates a module of text line refinement using the SVM output score in order to obtain a more accurate text boundary. The main contributions and novelties of our work in this paper are:

(1) We propose a new cascade framework for accurate text localization. We add a much more elaborate module of text line refinement based on the SVM output score and image similarity measurement. And, to select the accurate localization results from the different pyramid levels, we make use of pyramid composition based on the SVM output score.

(2) We propose to fuse the N-gray and CGV SVM classifiers together for text candidate verification. In addition, we compared five classic features (gray, N-gray, CGV, DCT (discrete cosine transform) coefficients, and Distmap (distance map)), and concluded that N-gray is the best.

0262-8856/$ - see front matter © 2008 Elsevier B.V. All rights reserved. doi:10.1016/j.imavis.2008.11.012

* Corresponding authors. Tel.: +82 31 290 7199; fax: +82 31 290 7687.

E-mail addresses: ckjung@ece.skku.ac.kr (C. Jung), qfliu1976@https://www.doczj.com/doc/4411889036.html (Q. Liu), jkkim@https://www.doczj.com/doc/4411889036.html (J.K. Kim).

Image and Vision Computing 27 (2009) 1295–1301

Contents lists available at ScienceDirect

Image and Vision Computing

journal homepage: www.elsevier.com/locate/imavis

The remainder of this paper is organized as follows. The text candidate detection, text candidate verification and text line refinement modules are described in Sections 2–4, respectively. Section 5 shows some experimental results and the corresponding analysis. In Section 6, this paper is concluded and further work is discussed.

2. Text candidate detection

In order to provide a good tradeoff between performance and processing speed, we first perform text candidate detection. The text candidate detection module aims at achieving a high recall rate and a low computational cost, while a low precision rate is allowed. Therefore, we need to select some simple features which allow the detection of text. Inspired by this consideration, we first compute the Canny edges and connect them into clusters by means of a morphologic filter. Then, the Y- and X-axis projection profiles of each connected component are analyzed to generate the text candidates.

2.1. Canny edge detection and morphologic filter

We do not use other complex techniques (e.g., wavelet or DCT) for text candidate detection, because we want to achieve a low computational cost and allow a comparatively high rate of false alarms. The Canny algorithm has been successfully used in many fields. We use 220 and 335 as the two threshold values of the Canny algorithm. By experiment, we found that the Canny algorithm is not sensitive to these two threshold values. Note that the important parameters are automatically optimized by our performance evaluation method based on training with the ground truth database. Then, a morphologic operation is performed to generate the binary edge images. We first use a dilation operator with a 15*3 template and then an erosion operator with a 5*7 template. We perform dilation carefully and erosion as much as possible, in order to avoid connecting two adjacent horizontal text lines.

2.2. Computing CCs and projection profile analysis

We compute the CCs (connected components) in the binary image resulting from the morphologic operation. Our method can fill a CC starting from the seed point with the specified color. The connectivity is determined by the closeness of the pixel values. The connectivity value is specified as 4, not 8, because we try to avoid over-connection. We analyze each CC by its Y- (horizontal) and X- (vertical) axis projection profiles. The basic idea is that, for each CC, the peaks in the Y-axis projection profile correspond to potential lines of text. Then, these CC regions are further analyzed using the X-axis projection profiles. We refer to this process as "edge-CCA". Edge-CCA is different from traditional CCA [5]. Traditional CCA is a pure bottom-up method, i.e., pixels with similar color are grouped into CCs (connected components), and further into text regions. On the other hand, in edge-CCA, the process from pixels to CCs is bottom-up, while the process from CCs to text candidates is top-down.
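As a rough illustration, the candidate-detection pipeline of this section can be sketched as follows. This is only a sketch, not the authors' implementation: it substitutes a simple gradient-magnitude edge map (with an assumed threshold) for the Canny detector, uses scipy morphology in place of the paper's operators, and reduces the projection-profile analysis to bounding-box extraction with a width-greater-than-height heuristic of our own.

```python
import numpy as np
from scipy import ndimage

def detect_text_candidates(gray, edge_thresh=50.0):
    """Sketch of edge-CCA candidate detection: edge map, morphology,
    connected components.  edge_thresh is an assumed value; the paper
    uses the Canny detector with thresholds 220 and 335 instead."""
    # Gradient-magnitude edge map (stand-in for Canny).
    gy, gx = np.gradient(gray.astype(float))
    edges = np.hypot(gx, gy) > edge_thresh
    # Dilate with a wide, short template to connect characters on a line,
    # then erode; template sizes follow the paper's 15*3 and 5*7 values
    # (interpreted here as width x height).
    edges = ndimage.binary_dilation(edges, structure=np.ones((3, 15)))
    edges = ndimage.binary_erosion(edges, structure=np.ones((7, 5)))
    # Connected components -> candidate bounding boxes (x, y, w, h).
    labels, _ = ndimage.label(edges)
    boxes = []
    for sl in ndimage.find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        if w > h:  # text lines are wider than tall (our heuristic)
            boxes.append((sl[1].start, sl[0].start, w, h))
    return boxes
```

On a synthetic image of vertical strokes (a crude text line), the strokes merge into a single wide candidate box, which is the intended behavior of the dilation step.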

3. Text candidate verification

Text candidate verification based on pattern classification has been shown to be a powerful method, because it tries to describe the intrinsic properties of text. To accomplish this, we chose to use an SVM rather than an NN, because it has been demonstrated that the SVM is better than the NN in text localization [12,15].

3.1. Text candidate verification using SVM

SVM is derived from statistical learning theory for pattern recognition. The basic idea of SVM is to implicitly project the input space onto a higher dimensional space where the two classes are more linearly separable. An extensive discussion of SVM can be found in [16].

Given a text candidate, we first normalize it to a height of 15 pixels and a correspondingly resized width. Then, we use a sliding 15*15 window, with each window overlapping 5 pixels of the previous window, to generate several samples. We use the trained SVM classifiers to classify each of these samples and fuse the results. The SVM output scores of these samples are averaged.

Table 1
Brief review of methods of text localization.

Heuristic knowledge-based methods:
- CCA [4,5]. Basic idea: pixels with similar color are grouped into connected components, and then into text regions. Comments: fast; however, it fails when texts are not homogeneous and text parts are not dominant in an image.
- Edge [6,7]. Basic idea: there are high contrast differences between text and background; a morphological filter is often used. Comments: fast; however, it fails when the background has edge clutter.
- Corner [8]. Basic idea: texts are rich in corners; corners are detected and then grouped into text regions. Comments: it removes some edge clutter; however, it does not work well when text corners are sparse.
- Texture [9]. Basic idea: texts have a specific texture pattern, so many texture features can be used. Comments: time consuming, and it fails when the background has texture clutter.

Machine learning-based methods:
- Gray intensity [11]. Basic idea: intrinsic patterns can be learned by SVM from the gray-intensity sub-image, and time cost is decreased by CAMSHIFT. Comments: more effective and faster than general texture-based methods; however, it may have a scale problem due to CAMSHIFT.
- Edge [12]. Basic idea: text candidates are detected using edge information and then verified by the SVM classifier with the CGV feature. Comments: effective; however, CGV has been shown by experiment to be no better than gray intensity.
- Frequency [13]. Basic idea: the image is scanned by an NN classifier based on wavelet moments. Comments: time consuming, and the wavelet (DCT) feature is no better than gray intensity, as demonstrated by experiment.
- Feature selection [10,14,15]. Basic idea: features are selected from a large set of different features for SVM and AdaBoost. Comments: effective, however time consuming (many feature extraction methods have to be performed).


If the average is larger than a certain threshold, then the whole text line is regarded as a text line; otherwise, it is regarded as a false alarm. We set the threshold to −0.3 based on the trade-off between the FAR (false acceptance rate) and the FRR (false rejection rate).

In [12], these SVM output scores are averaged with Gaussian weights. This is based on the assumption that the central part of the text line usually contains text. The assumption is not reasonable, because the actual cases are complex and the central part always contains spaces. Therefore, we do not use Gaussian weights. Our method of text candidate verification consists of five steps.

Step 1: Normalization of text candidates to a height of 15 pixels.
Step 2: Generation of pattern samples by means of a sliding 15*15 window.
Step 3: Feature extraction (Section 3.2).
Step 4: SVM prediction based on classifier fusion (Section 3.3).
Step 5: Final decision by combining SVM output scores.
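The five steps above can be sketched as follows, assuming `score_fn` stands in for the trained SVM's decision function (feature extraction and classifier fusion are folded into it), and using a nearest-neighbor resize for the normalization step.

```python
import numpy as np

def verify_text_line(candidate, score_fn, threshold=-0.3):
    """Sketch of the five-step verification: normalize, slide a 15x15
    window (5-pixel overlap, i.e. stride 10), score each window with
    the SVM, and threshold the average score at -0.3."""
    # Step 1: normalize the candidate to a height of 15 pixels with a
    # correspondingly resized width (nearest-neighbor resize).
    h, w = candidate.shape
    new_w = max(15, int(round(w * 15.0 / h)))
    rows = (np.arange(15) * h / 15.0).astype(int)
    cols = (np.arange(new_w) * w / float(new_w)).astype(int)
    norm = candidate[rows][:, cols]
    # Step 2: generate 15x15 window samples with stride 10.
    windows = [norm[:, x:x + 15] for x in range(0, new_w - 14, 10)]
    # Steps 3-4: extract features and obtain an SVM score per window.
    scores = [score_fn(win) for win in windows]
    # Step 5: combine the output scores and make the final decision.
    return float(np.mean(scores)) > threshold, scores
```

With a constant positive score function a 30x60 candidate yields two windows and is accepted; a constant score below −0.3 rejects it.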

3.2. Feature extraction

Here, we consider three features: (1) the gray intensity, (2) the normalized gray intensity (N-gray) and (3) the CGV. The reasons we select the gray intensity as a pattern feature are as follows:

(1) The gray intensity is shown in [11] to be the best feature compared with the wavelet and histogram features.
(2) It does not need complex feature extraction, which is generally time consuming.
(3) SVM can achieve good generalization performance in high dimensional space.
(4) It is very difficult to define the text pattern. Faces have a fixed pattern containing the eyes, nose, mouth, etc., while text strokes vary greatly. So, it is more reasonable to allow the SVM to automatically find the hidden pattern from the gray intensity, rather than force it to operate on a user-defined feature set.

To deal with cases of low contrast, we propose to use the normalized gray intensity, which is defined as

Nf(s) = [(f(s) − V_min) / (V_max − V_min)] × 255,   (1)

where f(s) is the original gray intensity of pixel s, and V_max/V_min represents the maximum/minimum value of f(s); f(s) is thus normalized from the range [V_min, V_max] into Nf(s) in [0, 255]. For classifier fusion, we choose the CGV as the second feature, since it was shown in [12] to be superior to gray-scale spatial derivatives, Distmap, DCT coefficients, etc.
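Eq. (1) amounts to a per-window contrast stretch; a minimal sketch (the guard for a flat window is our addition, not part of the equation):

```python
import numpy as np

def normalized_gray(f):
    """N-gray feature of Eq. (1): stretch the window's gray values
    from [V_min, V_max] to [0, 255] to compensate for low contrast."""
    f = f.astype(float)
    v_min, v_max = f.min(), f.max()
    if v_max == v_min:          # flat window: nothing to stretch
        return np.zeros_like(f)
    return (f - v_min) / (v_max - v_min) * 255.0
```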

3.3. SVM classifier fusion

To improve the classification performance, we fuse the N-gray and CGV features together. Because the SVM output score is the sum of weighted kernel distances between the test sample and the support vectors, it is reasonable to fuse these two features in the following way:

f_0 = P_1 · f_1 + P_2 · f_2,   (2)

where f_0 is the final SVM score, P_1 and P_2 are the prediction accuracies, and f_1 and f_2 are the SVM scores of the two features, respectively.
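Eq. (2) in code, with a usage example; the accuracy weights in the example are of the kind reported for N-gray and CGV in Table 3, used here purely for illustration:

```python
def fuse_scores(f1, f2, p1, p2):
    """Eq. (2): weight each classifier's SVM output score by its
    prediction accuracy (p1, p2 estimated on validation data)."""
    return p1 * f1 + p2 * f2

# Example: N-gray score 1.0, CGV score -0.5, accuracy-weighted fusion.
fused = fuse_scores(1.0, -0.5, 0.9615, 0.9462)
```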

3.4. Pyramid decomposition and composition

To handle large variations (from 10 to 150 pixels) in the text size, we use the method of pyramid decomposition and composition, in which the source image is decomposed into N (= 3) levels of decreasing size. For each pyramid level, we perform text candidate detection and verification with the same parameters. Then, the results from the different levels are grouped together. The main problem occurs when the same text line is detected in two or three pyramid levels. In this case, we have to select the best one. To accomplish this, we perform pyramid composition based on the SVM output score: the result with the higher SVM output score is selected. This method is reasonable, because the highest SVM output score means the best localization of the text.
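The composition step can be sketched as follows, assuming each level reports (box, score) pairs whose boxes are already mapped back to source-image coordinates. The 0.5 overlap ratio used to decide that two boxes cover "the same text line" is our assumption, not a value from the paper.

```python
def pyramid_compose(detections_per_level):
    """Sketch of pyramid composition: pool detections from the N=3
    levels and, where the same line is found at several levels, keep
    the box with the highest SVM output score."""
    def overlap(a, b):
        # Intersection area relative to the smaller box.
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
        iy = max(0, min(ay + ah, by + bh) - max(ay, by))
        return (ix * iy) / float(min(aw * ah, bw * bh))

    pooled = [d for level in detections_per_level for d in level]
    pooled.sort(key=lambda d: d[1], reverse=True)   # best score first
    kept = []
    for box, score in pooled:
        if all(overlap(box, k[0]) < 0.5 for k in kept):
            kept.append((box, score))
    return kept
```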

4. Text line refinement

Most other researchers do not pay much attention to text line refinement; their methods of text line refinement are very simple, being based only on some heuristic information, or do not exist at all. However, when we carefully observe the results of the text candidate detection and verification procedure, we find a number of problems. The text line boundaries are determined by the edge-CCA, which is generally not accurate. For example: (1) some text candidates contain too much background, (2) some texts are missed by text candidate detection, and (3) some characters are divided into two text line boxes, as shown in Fig. 2. These problems have a detrimental effect on the recognition results.

Therefore, we think that the initially localized text lines should be refined in order to solve these three problems. We make use of the SVM output score and the color distribution for the text line refinement. The text line refinement module consists of text boundary shrinking, text boundary combination, and text boundary extension [17].

4.1. Text boundary shrinking

For each initial text line, we use a sliding window to generate some samples. These samples are verified by an SVM classifier which was trained in advance. The output score of the SVM can provide a quantitative measure of the closeness to the text. If the score is larger than a certain threshold, the corresponding sub-image is accepted as text. Based on the SVM output scores, we get rid of the non-text parts on the right and left sides of the input sub-image. Thus, the initial text line shrinks to the center position of the text region. Fig. 3 is a good reference for understanding this method. The green boxes are those in which the SVM output score is smaller than the threshold, and the green boxes on the right and left sides of the sub-image are removed by the text boundary shrinking function.

Fig. 2. Problems of initial localization results.

Fig. 3. Text boundary shrinking.
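The shrinking rule above can be sketched as follows, given the per-window SVM scores of one text line (window size and stride as in Section 3; returning None for a fully rejected line is our convention):

```python
def shrink_boundary(window_scores, threshold=-0.3, stride=10, win=15):
    """Sketch of text boundary shrinking: drop leading and trailing
    windows whose SVM score falls below the threshold, and return the
    pixel range (left, right) of the surviving text region."""
    lo, hi = 0, len(window_scores)
    while lo < hi and window_scores[lo] <= threshold:
        lo += 1                      # shrink from the left
    while hi > lo and window_scores[hi - 1] <= threshold:
        hi -= 1                      # shrink from the right
    if lo == hi:
        return None                  # no window accepted as text
    return lo * stride, (hi - 1) * stride + win
```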

4.2. Text boundary combination

In general, the initial text localization results contain many text fractions, which should be combined together. To avoid falsely accepting non-text image parts, we propose to use the SVM output scores for boundary combination. We choose two of the initially localized text lines as a pair, which are combined together if: (1) they are near each other, or (2) they are not near each other, but the image part between them is accepted by the SVM. We check every pair of the initial text lines. This method is illustrated in Fig. 4(a), where boxes a, b and c denote the three initial text boundary boxes before boundary combination. Box D denotes the image part between a and b, and the green boxes denote the results after boundary combination. If the SVM score of box D is high, boxes a and b are combined. Then, boxes a, b and c are combined, because boxes b and c are near. However, if the score of D is low, boxes a and b cannot be combined. Some related results are shown in Fig. 4(b). In these results, the missed text regions are correctly recalled by the text boundary combination function.

Fig. 4. Text boundary combination.

Fig. 5. Text boundary extension.
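A one-dimensional sketch of this pairwise combination rule, assuming the boxes are horizontal intervals (x0, x1) on the same line and `score_fn(gap_x0, gap_x1)` stands in for scoring the gap sub-image with the SVM; the `near_gap` value is an assumption for illustration:

```python
def combine_boxes(boxes, score_fn, near_gap=10, threshold=-0.3):
    """Sketch of text boundary combination: merge a pair of boxes when
    they are near each other, or when the image part between them is
    itself accepted by the SVM."""
    boxes = sorted(boxes)
    merged = [list(boxes[0])]
    for x0, x1 in boxes[1:]:
        gap = x0 - merged[-1][1]
        if gap <= near_gap or score_fn(merged[-1][1], x0) > threshold:
            merged[-1][1] = max(merged[-1][1], x1)   # combine the pair
        else:
            merged.append([x0, x1])                  # keep separate
    return [tuple(b) for b in merged]
```

With a low gap score only the near pair merges (boxes a and b of Fig. 4 stay apart when D is rejected); with a high gap score all boxes on the line merge.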

4.3. Text boundary extension

Sometimes, the initial text localization results are not complete due to complex illumination and background clutter. Therefore, some parts of the text can be missed. To recall the missed text, we add a text boundary extension function based on the SVM output score and an image similarity measurement, which are combined together to improve the performance. Our purpose is to determine whether the outer sub-image near the initial text line should be accepted, based on its SVM score and on the image similarity between the central part of the initially detected result and the outer sub-image. This method contains the following four steps. Fig. 5 gives a clear illustration of the text boundary extension function.

Step 1: Non-parametric PDF estimation

From the central part of each initial text line sub-image, we estimate the color PDF (probability density function), p_1, in HS (hue and saturation) color space, using the Parzen window-based method [18]:

p_1(x) = (1 / (n h^d)) Σ_{i=1}^{n} K((x − x_i) / h),   (3)

where K(·) is the Gaussian kernel function with scale h, x_i is the HS color vector of the i-th pixel in the detected text line sub-image, and d is the dimension of x (= 2). Note that we choose the HS color space because HSV is very similar to human color perception; "V" is not used here because it represents the brightness of the color, which should be ignored in our case. In a similar way, the PDF, p_2, of the boundary sub-image is estimated.
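A sketch of the Parzen estimate of Eq. (3) for 2-D hue/saturation vectors; we use the standard multivariate normalization 1/(n h^d) with the Gaussian kernel, consistent with the dimension d defined above, and the bandwidth value in the example is an assumption:

```python
import numpy as np

def parzen_pdf(samples, h=0.05):
    """Parzen-window estimate of the HS color PDF, Eq. (3): a sum of
    Gaussian kernels centered on the hue/saturation vectors of the
    sub-image pixels; h is the kernel scale (assumed value)."""
    samples = np.asarray(samples, dtype=float)   # shape (n, d), d = 2
    n, d = samples.shape

    def p(x):
        u = (np.asarray(x, dtype=float) - samples) / h
        # Standard multivariate Gaussian kernel K(u).
        k = np.exp(-0.5 * np.sum(u * u, axis=1)) / (2 * np.pi) ** (d / 2.0)
        return k.sum() / (n * h ** d)

    return p
```

The returned closure evaluates the estimated density at any HS vector; in the method, p_1 comes from the text-line center and p_2 from the candidate boundary sub-image.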

Step 2: Image similarity measurement

Measuring the similarity between images (or their PDFs) is an essential issue in low-level computer vision. We use the Bhattacharyya coefficient, D_Bhat, which is nearly optimal, since it is the upper bound of the Bayesian classification error. Comaniciu et al. demonstrated that D_Bhat achieves impressive performance in texture retrieval and non-rigid tracking [18]. The Bhattacharyya coefficient, D_Bhat, can be regarded as the probability distance between the PDFs, p_1 and p_2, which is formulated as

D_Bhat(p_1, p_2) = −ln ∫ √(p_1(x) · p_2(x)) dx.   (4)

Intuitively, the smaller D_Bhat is, the more similar the two sub-images (or their PDFs) are.
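On discretized PDFs (histograms summing to 1), Eq. (4) and the Step 4 decision can be sketched as follows. The direction of the two threshold tests (small D_Bhat, large f_0) is our reading of the garbled inequality in the printed text, using the threshold values of Table 2; the epsilon guard against log(0) is our addition.

```python
import numpy as np

def bhattacharyya_distance(p1, p2):
    """Eq. (4) on discretized PDFs:
    D_Bhat = -ln( sum_x sqrt(p1(x) * p2(x)) )."""
    bc = np.sum(np.sqrt(np.asarray(p1) * np.asarray(p2)))
    return -np.log(max(bc, 1e-12))   # guard against log(0)

def extend_boundary(d_bhat, svm_score, t1=0.5, t2=0.1):
    """Boundary-extension test with the Table 2 thresholds: extend
    when the outer sub-image is similar enough to the text line
    (small D_Bhat) and is accepted by the SVM (score above T2)."""
    return d_bhat < t1 and svm_score > t2
```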

Step 3: SVM classification

The SVM output score, f_0, of the boundary sub-image is computed.

Step 4: Boundary extension

If D_Bhat < T_1 and f_0 > T_2, the initial text line is extended to include the boundary sub-image. T_1 and T_2 are the threshold values of the similarity between the two PDFs and of the SVM output score, respectively. Then, we go to Step 1 to test the next boundary sub-image, until no new text region is found. We select the threshold values, T_1 and T_2, using the grid search method. The range and interval for determining these two values are shown in Table 2. Herein, we set the values of T_1 and T_2 to 0.5 and 0.1, respectively.

5. Experimental results

We performed the experiments on a PC with one CPU (Intel Pentium 4, 3.0 GHz) and 1 GB RAM, running Windows XP, using Visual C++ 6.0. Our ground truth database contains news images from the South Korean TV channels KBS, MBC and SBS, whose size is 720*480 (the text line height ranges from 10 to 150 pixels). Of these images, 435 are for testing and 357 for training.

There are 3965 positive and 3453 negative samples for training, and 7437 positive and 8000 negative samples for testing. We use a tool named MDTDS (making database for text detection and segmentation), implemented by ourselves, to semi-automatically collect the ground truth text boxes and their attributes, as shown in Fig. 6. In the ground-truthing process, we first open an image containing video text, and then draw a bounding box for each text line in the clip by dragging the mouse over the text area. This tool can easily adjust the position and size of the text boxes and produce accurate ground truth data. It saves all of the information in the GTD extension format.

We compare the five features using the ground truth data, as shown in Table 3. Through experiments, we found that "N-gray" is the best feature, having slightly more SVs (support vectors) than "gray". Also, the accuracy of the N-gray feature is comparable to the result of the feature combination and selection process described in [15].

Table 2
Threshold values for boundary extension.

Value  Range     Interval  Best para.
T1     [0, 1]    0.05      0.5
T2     [−3, 3]   0.1       0.1

Fig. 6. Interface of the MDTDS tool.

Table 3
Comparison of five features.

Feature   C–V accuracy  Prediction accuracy  # of neg. SV  # of pos. SV  Best para. (C, γ)
Distmap   0.9109        0.9081               1672          1604          16, 14
CGV       0.9448        0.9462               1311          1232          3, −4
DCT       0.9536        0.9465               1250          1219          3, −3
Gray      0.9624        0.9520               1017          1071          3, −6
N-gray    0.9699        0.9615               1283          1216          1, −6

Then, we compare the three methods by means of a performance evaluation, as shown in Table 4. The recall rate and precision rate are defined as:

(1) Recall rate = (Number of recalled GT characters) / (Number of characters in GT).
(2) Precision rate = (Number of validly detected characters) / (Number of detected text characters).
(3) Number of characters = Text line length / (Text height × 1.0125).
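The evaluation measures above can be written directly as (the character count is the approximation of definition (3), not an exact count):

```python
def character_count(line_length, text_height):
    """Approximate character count used in the evaluation:
    line length / (text height * 1.0125)."""
    return line_length / (text_height * 1.0125)

def recall_rate(recalled_gt_chars, gt_chars):
    """Recalled ground-truth characters over all GT characters."""
    return recalled_gt_chars / float(gt_chars)

def precision_rate(valid_detected_chars, detected_chars):
    """Validly detected characters over all detected characters."""
    return valid_detected_chars / float(detected_chars)
```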

From Table 4, we find that "N-gray" is the most economical, and "N-gray + CGV" is the most effective, at the cost of processing speed. Although the precision rate after text line refinement is the same as the rate before refinement, the recall rate is about 2% higher than before. Therefore, the text line refinement module based on the SVM output score recalls about 2% more of the missed text compared to the method without text line refinement. This shows that the proposed method can give more accurate text localization results. Also, it can be seen in Fig. 7 that the processing time after text line refinement is about 0.04–0.09 s per frame higher than before; this additional time is consumed by the text line refinement module.

Table 4
Performance comparison of three methods ("1" ("2") means without (with) text line refinement).

Feature       SVM FRR   SVM FAR   Recall rate 1  Precision rate 1  Recall rate 2  Precision rate 2  Time cost per frame (s)
Gray          0.023984  0.028720  0.924206       0.954968          0.942138       0.956507          0.29
N-gray        0.020597  0.030785  0.941358       0.961054          0.962657       0.966102          0.32
N-gray + CGV  0.016970  0.025107  0.954511       0.970014          0.972536       0.976241          0.64

Fig. 8. Some experimental results.

It is not easy to compare the proposed method with the state-of-the-art achievements, because each study evaluates performance using a different database. However, there are some outstanding studies which are based on the same features and methodology as our own. For example, Chen [12] used the CGV feature and an SVM classifier, and Kim [19] used the gray feature and an SVM classifier. These methods of text localization are similar to our own, except for the text line refinement process. Therefore, we conclude that our method can improve the recall rate of text localization by more than 2% in comparison with the state-of-the-art methods.

Fig. 8 illustrates some examples of the test images. Most of the text is localized robustly and accurately despite the varying font sizes, colors and languages. The blue boxes are the final results, while the boxes with other colors are intermediate results of the text localization process. There are also some wrong results in Fig. 8. Some text lines are extended excessively beyond the text regions in Fig. 8(j) and (k), because the extended regions are similar to the text regions. In future work, these false alarms will be eliminated by feedback from the OCR results. Also, a text line with a quite large font size is missed in Fig. 8(l). In the experiments, we find that text whose height is larger than 150 pixels is prone to being missed. We predict that this missed text can be recalled by increasing the number of pyramid levels in Section 3.4.

6. Conclusions and future work

An accurate text localization method based on the SVM output score is proposed in this paper. The SVM output score provides us with a measure of the closeness of a text candidate region to the text. Up to now, most researchers have used this score only for the verification of the text candidate region [11,12,15]. However, we use the output score for refining the initially localized text lines and for selecting the best localization result from the different pyramid levels. In this way, we obtain more accurate text localization results. The performance evaluation results confirm the effectiveness of the proposed method: the text line refinement module enables about 2% more of the missed text to be recalled, compared to the method without text line refinement. Because the text localization results are used as the input of OCR (optical character recognition), the proposed accurate text localization method will significantly improve the recognition results. Although the proposed method is designed mainly for localizing superimposed text in an image, it can also be used for the accurate localization of general text, including video text, scene text, and so on.

Currently, we provide an accurate text localization method using the SVM output score. Although this method is effective for horizontal text lines, it is susceptible to skew and distortion. Our future work will be related to overcoming this limitation. Also, text should be clearly segmented from its background to obtain a good recognition result. The approaches used for text segmentation can be classified into difference-based [20–24] (or top-down) and similarity-based (or bottom-up) methods [12,25,26]. In the future, we will establish a robust technique which segments text from complex backgrounds on the basis of these approaches.

Acknowledgements

Part of the work reported in this manuscript was conducted while the authors were with the Samsung Advanced Institute of Technology (SAIT). The authors are grateful to Dr. Keehyoung Joo and Dr. Young-Kyung Park for their valuable discussions, and to the anonymous reviewers for their useful comments.

References

[1] N. Dimitrova, H.J. Zhang, B. Shahraray, I. Sezan, A. Zakhor, T. Huang, Applications of video content analysis and retrieval, IEEE Multimedia 9 (3) (2002) 43–55.
[2] Y. Wang, Y. Liu, J.C. Huang, Multimedia content analysis using both audio and visual clues, IEEE Signal Processing Magazine 17 (6) (2000) 12–36.
[3] K. Jung, K.I. Kim, A.K. Jain, Text information extraction in images and video: a survey, Pattern Recognition 37 (5) (2004) 977–997.
[4] R. Lienhart, W. Effelsberg, Automatic text segmentation and text recognition for video indexing, TR-98-009, University of Mannheim, 1998.
[5] A.K. Jain, B. Yu, Automatic text location in images and video frames, Pattern Recognition 31 (1998) 2055–2076.
[6] M. Lyu, J. Song, M. Cai, A comprehensive method for multilingual video text detection, localization, and extraction, IEEE Transactions on Circuits and Systems for Video Technology 15 (2) (2005) 243–255.
[7] M. Yassin, Y. Hasan, L.J. Karam, Morphological text extraction from images, IEEE Transactions on Image Processing 9 (11) (2000) 1978–1983.
[8] X. Hua, X. Chen, L. Wenyin, H.J. Zhang, Automatic location of text in video frames, in: ACM Multimedia Workshop: Multimedia Information Retrieval, 2001.
[9] V. Wu, R. Manmatha, E.M. Riseman, TextFinder: an automatic system to detect and recognize text in images, IEEE Transactions on Pattern Analysis and Machine Intelligence 21 (11) (1997) 1224–1229.
[10] X. Chen, A.L. Yuille, Detecting and reading text in natural scenes, in: Proc. of IEEE Int. Conf. on Computer Vision and Pattern Recognition, 2004, pp. 366–373.
[11] K. Kim, K. Jung, J.H. Kim, Texture-based approach for text detection in images using support vector machines and continuously adaptive mean shift algorithm, IEEE Transactions on Pattern Analysis and Machine Intelligence 25 (12) (2003) 1631–1639.
[12] D. Chen, O. Jean-Marc, B. Herve, Text detection and recognition in images and video frames, Pattern Recognition (2004) 595–608.
[13] H. Li, D. Doermann, O. Kia, Automatic text detection and tracking in digital video, IEEE Transactions on Image Processing 9 (1) (2000) 147–156.
[14] Y. Zheng, H. Li, D. Doermann, Machine printed text and handwriting identification in noisy document images, IEEE Transactions on Pattern Analysis and Machine Intelligence 26 (3) (2004) 337–353.
[15] Q. Ye, Q. Huang, W. Gao, D. Zhao, Fast and robust text detection in images and video frames, Image and Vision Computing 23 (2005) 565–576.
[16] V. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, 1995.
[17] C. Jung, Q. Liu, J. Kim, SVM output score based text line refinement for accurate text localization, in: Proc. of IEEE Int. Conf. on Acoustics, Speech and Signal Processing, 2008, pp. 1945–1948.
[18] D. Comaniciu, V. Ramesh, P. Meer, Real-time tracking of non-rigid objects using mean shift, in: Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, 2000, pp. 142–149.
[19] K.I. Kim, K. Jung, S.H. Park, H.J. Kim, Support vector machine-based text detection in digital video, Pattern Recognition 34 (2001) 527–529.
[20] T. Sato, T. Kanade, E.K. Hughes, M.A. Smith, S. Satoh, Video OCR: indexing digital news libraries by recognition of superimposed caption, ACM Multimedia Systems Special Issue on Video Libraries 7 (5) (1998) 385–395.
[21] N. Otsu, A threshold selection method from gray-scale histograms, IEEE Transactions on Systems, Man and Cybernetics 9 (1979) 62–66.
[22] F. Chang, G.C. Chen, C.C. Lin, W.H. Lin, Caption analysis and recognition for building video indexing systems, Multimedia Systems 10 (4) (2005) 344–355.
[23] C. Wolf, J. Jolion, Extraction and recognition of artificial text in multimedia documents, Pattern Analysis and Applications 6 (2003) 309–326.
[24] K. Zhu, F. Qi, R. Jiang, L. Xu, M. Kimachi, Y. Wu, T. Aizawa, Using AdaBoost to detect and segment characters from natural scenes, in: Proc. of Int. Workshop on Camera-based Document Analysis and Recognition, 2005, pp. 52–58.
[25] R. Lienhart, Automatic text recognition in digital videos, in: Proc. of SPIE Image and Video Processing, vol. 4, 1996, pp. 2666–2675.
[26] K. Wang, J.A. Kangas, W. Li, Character segmentation of color images from digital camera, in: Proc. of Int. Conf. on Document Analysis and Recognition, 2001, pp. 210–214.

