
Gas Sensors: Translation of Foreign Literature


Undergraduate Graduation Project

Translation of Foreign Literature

Title: Sensor Technology

Major: **************

Class: ********

Name: *******

Supervisor: **************

College: ************

Attachments: 1. Translation of the foreign literature; 2. Original foreign text

Multiple Classifier Systems for Multisensor Data Fusion

I. Introduction

In many pattern recognition and automated identification applications, it is not uncommon for data obtained from different sensors monitoring a physical phenomenon to provide complementary information. An appropriate combination of such information is usually referred to as data or information fusion, and it can improve the accuracy and confidence of the classification decision compared with a decision based on any single data source alone.

We previously introduced Learn++, an ensemble-of-classifiers based approach, as an effective automated classification algorithm capable of learning incrementally. The algorithm can acquire novel information from additional data that become available after the classification system has already been designed. To achieve incremental learning, Learn++ generates an ensemble of classifiers (experts), each of which is trained on the database currently available. Recognizing the conceptual similarity between data fusion and incremental learning, we discuss a similar approach for data fusion: employ an ensemble of experts, train each one on the data provided by one of the sources, and then strategically combine their outputs. We have observed that, on several benchmark and real-world databases, the performance of such a system in decision-making applications is significantly and consistently better than that of a decision based on any single data source.

The applications for such a system are numerous, wherever data available from multiple sources (or multiple sensors) generated by the same application may contain complementary information. For instance, in the non-destructive evaluation of pipelines, defect information may be obtained from eddy current images, magnetic flux leakage images, ultrasonic scans, or thermal imaging; likewise, different pieces of diagnostic information may be obtained from several different medical tests, such as blood analysis, electrocardiography or electroencephalography, or from medical imaging devices such as ultrasonic, magnetic resonance or positron emission scans. Intuitively, if the information from multiple sources can be appropriately combined, the performance of the classification system (in detecting whether there is a defect, or in reaching a diagnostic decision) can be improved. Both incremental learning and data fusion therefore involve learning from different sets of data. In incremental learning, supplementary information must be extracted from new datasets, which may include instances of new classes; in data fusion, complementary information must be extracted from new datasets, which may represent the data using different features.

Traditional methods are generally based on probability theory (Bayes' theorem, Kalman filtering) or on decision theory, such as Dempster-Shafer (DS) theory and its variations, which were primarily developed for military applications, most notably target detection and tracking. Ensemble-of-classifiers based approaches seek to provide a fresh and more general solution for a broader spectrum of applications. It should also be noted that in several applications, such as the non-destructive testing and medical diagnostics mentioned above, the data obtained from different sources may have been generated by different physical modalities, and the features obtained may therefore be heterogeneous. While probability or decision theoretic approaches become more complicated in such cases, heterogeneous features can easily be accommodated by an ensemble-based system, as discussed below.

An ensemble system combines the outputs of several diverse classifiers or experts. The diversity among the classifiers allows different decision boundaries to be generated by using slightly different training parameters, such as different training datasets. The intuition is that each expert will make different errors, and strategically combining these classifiers can reduce the total error. Ensemble systems have attracted a great deal of attention over the past decade because of their reported superiority over single-classifier systems in a variety of applications.
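As a toy illustration of this intuition (an idealized arithmetic sketch, not taken from the paper, since it assumes the classifiers err independently), consider three classifiers that are each correct with probability 0.7:

```python
# Idealized example: three classifiers with independent errors, each with accuracy p.
# A simple majority vote is correct whenever at least two of the three are correct.
p = 0.7
p_majority = p**3 + 3 * p**2 * (1 - p)
print(round(p_majority, 3))  # 0.784 > 0.7, so the combined error rate is lower
```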

Recognizing the potential of this approach for incremental learning applications, we recently developed Learn++ and showed that Learn++ is indeed capable of learning incrementally from new data. Furthermore, the algorithm does not require access to previously used data, does not forget previously acquired knowledge, and is able to accommodate instances of classes that were unseen in earlier training. The general approach in Learn++, much like that of other ensemble algorithms such as AdaBoost, is to create an ensemble of classifiers, where each classifier learns a subset of the dataset; the classifiers are then combined using weighted majority voting. In this contribution, we review the Learn++ algorithm, suitably modified for data fusion applications. In essence, for each dataset generated from a different source and/or using different features, Learn++ generates a new ensemble of classifiers, which are then combined using weighted majority voting.

II. LEARN++

The pseudocode of the Learn++ algorithm, as applied to the data fusion problem, is given in Figure 1 and described in detail in the following paragraphs.

For each database FS_k, k = 1, …, K, consisting of a different set of features and submitted to Learn++, the inputs to the algorithm are: (i) a sequence S_k of m_k training data instances x_i along with their correct labels y_i; (ii) a supervised classification algorithm, BaseClassifier, which generates the individual classifiers (henceforth, hypotheses); and (iii) an integer T_k, the number of classifiers to be generated for the k-th database.

Each hypothesis h_t, generated during the t-th iteration of the algorithm, is trained on a different subset of the training data. This is achieved by initializing a set of weights for the training data, w_t, and a distribution D_t obtained from w_t (step 1). According to this distribution, a training subset TR_t is drawn from the training data S_k (step 2); the distribution D_t determines which instances of the training data are more likely to be selected into the training subset TR_t. The base classifier is trained on TR_t in step 3, returning the t-th hypothesis h_t. The error of this hypothesis, ε_t, is computed on the current database S_k as the sum of the distribution weights of the misclassified instances (step 4). This error is required to be less than 1/2 to ensure that a minimum reasonable performance can be expected from h_t. If this is the case, the hypothesis h_t is accepted and the error is normalized to obtain the normalized error β_t (step 5).
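To make these steps concrete, the following is a minimal sketch of one such iteration; it assumes NumPy arrays, a decision tree as the base classifier, and a 50% subset fraction, none of which is prescribed by the algorithm.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # stand-in base classifier (an assumption)

def learnpp_iteration(X, y, w, subset_frac=0.5, rng=None):
    """One iteration of steps 1-5 in Fig. 1 (a sketch, not the authors' code).

    X, y : training data S_k and its labels; w : current instance weights.
    Returns (h_t, beta_t), or None when eps_t >= 1/2 and h_t must be discarded."""
    rng = rng or np.random.default_rng()
    D = w / w.sum()                                    # step 1: distribution D_t from w_t
    idx = rng.choice(len(X), size=int(subset_frac * len(X)), replace=True, p=D)
    h_t = DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx])  # steps 2-3: draw TR_t, train h_t
    miss = h_t.predict(X) != y
    eps_t = D[miss].sum()                              # step 4: weighted error on all of S_k
    if eps_t >= 0.5:
        return None                                    # reject; caller draws a new TR_t
    beta_t = eps_t / (1.0 - eps_t)                     # step 5: normalized error
    return h_t, beta_t
```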

(Final hypothesis, from the pseudocode in Fig. 1:)

H_{final}(x) = \arg\max_{y \in \Omega} \sum_{k=1}^{K} \alpha_k \sum_{t:\, h_t(x) = y} \log\!\left(\frac{1}{\beta_t}\right)

Fig. 1. The Learn++ algorithm for data fusion

If ε_t ≥ 1/2, the current hypothesis is discarded and a new training subset is selected by returning to step 2. All t hypotheses generated so far are then combined using weighted majority voting (WMV) to obtain the composite hypothesis H_t. In WMV, each hypothesis is assigned a weight that is inversely proportional to its error, giving a higher voting weight to classifiers with smaller training error. The error of the composite hypothesis H_t is then computed in a similar fashion, as the sum of the distribution weights of the instances misclassified by H_t (step 6).

The normalized composite error B_t is then obtained and used in step 7 to update the distribution weights assigned to the individual instances. The distribution weights of the instances correctly classified by the composite hypothesis H_t are reduced by a factor of B_t; hence, when the distribution is re-normalized in step 1 of the next iteration, the weights of the misclassified instances are effectively increased. This weight-update rule, based on the performance of the current ensemble, facilitates learning from new data: when a new dataset is introduced (particularly one with new classes or features), the existing ensemble (H_t) is likely to misclassify the instances that have not yet been properly learned, so the weights of these instances are increased, forcing the algorithm to focus on the new data.
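A corresponding sketch of steps 6 and 7, under the same assumptions as the iteration sketch above (illustrative only; the voting weight log(1/β_t) follows the weighted majority voting rule described here):

```python
import numpy as np

def composite_and_update(hypotheses, betas, X, y, w):
    """Sketch of steps 6-7 in Fig. 1: composite hypothesis H_t by weighted majority
    voting, its normalized error B_t, and the instance-weight update."""
    classes = np.unique(y)
    votes = np.zeros((len(X), len(classes)))
    for h, beta in zip(hypotheses, betas):
        pred = h.predict(X)
        for j, c in enumerate(classes):
            votes[:, j] += (pred == c) * np.log(1.0 / beta)   # voting weight log(1/β_t)
    H_t = classes[votes.argmax(axis=1)]                       # composite hypothesis H_t

    D = w / w.sum()
    E_t = D[H_t != y].sum()                                   # composite error (step 6)
    B_t = E_t / (1.0 - E_t)                                   # normalized composite error
    w_new = np.where(H_t == y, w * B_t, w)                    # step 7: shrink correct instances
    return H_t, B_t, w_new
```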

For data fusion applications, an additional set of weights is introduced for each ensemble. These weights represent the importance and reliability of the particular data source and can be assigned based on prior experience (for example, in diagnosing a neurological disorder we may know that magnetic resonance imaging (MRI) is more reliable than the electroencephalogram, and we may therefore choose to give a higher weight to the classifiers trained with MRI data), or they can be based on the performance of the ensemble trained on the particular feature set on its own training data. We compute such a weight α_k for the k-th dataset based on the training performance of the ensemble trained on that dataset, and adjust the voting weights using α_k. The adjusted weight of each classifier is then used in the weighted majority voting that forms the final hypothesis H_final.
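The final decision rule of Fig. 1 can then be sketched as follows; the argument layout (one feature-set view of the test instance per ensemble, plus the α_k weights) is an illustrative convention rather than the authors' implementation.

```python
import numpy as np

def final_hypothesis(x_views, ensembles, alphas, classes):
    """Sketch of the final decision H_final(x) in Fig. 1.

    x_views[k]   : the test instance represented with the k-th feature set FS_k,
    ensembles[k] : list of (h_t, beta_t) pairs trained on the k-th dataset,
    alphas[k]    : reliability weight alpha_k of the k-th data source."""
    scores = {c: 0.0 for c in classes}
    for x_k, ensemble, alpha_k in zip(x_views, ensembles, alphas):
        for h_t, beta_t in ensemble:
            c = h_t.predict([x_k])[0]                  # each expert votes on its own view of x
            scores[c] += alpha_k * np.log(1.0 / beta_t)
    return max(scores, key=scores.get)                 # class with the largest weighted vote
```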

A schematic representation of the algorithm is given in Figure 2. Simulation results of Learn++ on incremental learning using several datasets, as well as comparisons with other incremental learning methods such as Fuzzy ARTMAP, can be found in [11]. Simulation results of Learn++ on two data fusion applications are presented below; they include additional detail, updated results and further insight beyond those presented in [14, 15]. Both are real-world applications: one involves the combination of ultrasonic and magnetic flux leakage data for the identification of pipeline defects, and the other involves the combination of chemical sensor data from several sensors for the identification of volatile organic compounds.

Fig. 2. Schematic representation of the algorithm

The Effect of Thickness on the ZnO Thin Film CO Gas Sensor

3. Results and Discussion

3.1 Structural Characteristics

The sensing mechanism of semiconducting ZnO belongs to the surface-controlled type, in which the gas sensitivity is determined by the surface adsorption sites/area. The surface morphology varies significantly with film thickness, so the total adsorption area exposed to the target gas may also vary with film thickness. In this work, the adsorption area, as a function of ZnO film thickness, was controlled by changing the deposition time among 30, 60, 90, 120, 150 and 180 min. The ZnO film thickness increased from 89 nm to 510 nm as the deposition time varied from 30 to 180 min.

Figure 2. EDAX spectrum of the ZnO thin film

As shown in Fig. 2, the spectrum indicates that the prominent peak is the Zn line, followed by the O line. The other peak in the spectrum is an Au peak arising from the gold coating applied in preparation for SEM observation.

Fig. 3. SEM morphology of the surface of ZnO films of various thicknesses: (a) 89 nm, (b) 240 nm, (c) 425 nm

The surface morphologies of ZnO films of various thicknesses are shown in Fig. 3. It can be seen that the films are smooth in morphology, with the grain size becoming much more developed as the film thickness increases.

3.2 Sensing Characteristics of the Thin Film Sensors

In general, the sensitivity of a sensor is affected by the operating temperature. A higher temperature enhances the surface reaction of the thin film and gives a higher sensitivity within a certain temperature range.

Fig. 4. The relation between sensitivity and operating temperature of ZnO films under a 2100 ppm CO atmosphere

Fig. 4 shows the sensitivity as a function of operating temperature, from 50 °C to 350 °C, for deposited ZnO films of various thicknesses under a CO concentration of 2100 ppm. A rapid increase in sensitivity was observed for the 89 nm film as the operating temperature was raised to 200 °C, where the sensitivity reached a maximum and then decreased with further increase in operating temperature. From this observation it can be inferred that the sensitivity increases as the film thickness decreases, and that the optimum operating temperature of the 89 nm film is the lowest compared with the other films. The sensitivity of a metal oxide semiconductor sensor is mainly determined by the interaction between the target gas and the sensor surface: the greater the surface area of the material, the stronger the interaction between the adsorbed gas and the sensor surface, i.e. the higher the gas sensing sensitivity. It can be seen from the SEM morphologies in Fig. 3 that the grain size is small in the 89 nm film and large in the 425 nm film, so the grain boundary area is largest for the 89 nm film in this study.

Fig. 5. Transient responses of the ZnO film (89 nm thick) exposed to various CO concentrations at 300 °C

Fig. 5 shows the sensitivity as a function of CO gas concentration for the as-deposited 89 nm ZnO film at 300 °C. The sensitivity of the ZnO gas sensor increased as the CO gas concentration was increased from 400 to 2100 ppm, and it dropped rapidly when the CO gas was removed, indicating that the gas sensor responds well to different CO concentrations. Moreover, the sensor took almost the same time to reach its maximum sensitivity at the different CO concentrations, which is consistent with the conclusion that the operating temperature dominates the response time.

Fig. 6. Relationship between gas sensitivity and operating temperature at various CO concentrations for the 89 nm ZnO film

Fig. 6 shows the sensitivity as a function of operating temperature for various CO concentrations. It can be seen that the sensitivity of the ZnO film increases as the CO concentration is varied from 200 to 2100 ppm.

Fig. 7. Effect of operating temperature on the sensitivity of the as-deposited 89 nm ZnO thin film under a 2100 ppm CO atmosphere

Fig. 7 shows the dynamic variation of sensitivity with operating time at various operating temperatures for the as-deposited 89 nm ZnO film in a 2100 ppm CO gas atmosphere. It was observed that the sensitivity reached a maximum when the operating temperature was increased to 350 °C.

4. Conclusion

The structures and sensing properties of ZnO films of various thicknesses, obtained by an rf magnetron sputtering system and used as CO gas sensors, were investigated. The structural characteristics reveal that the grain size increased as the film thickness was increased, which resulted in a decrease in the total surface area and consequently a lower sensing sensitivity. Furthermore, the sensitivity of the gas sensor also increased as the concentration of CO gas was increased. The sensitivity as well as the response time were improved by increasing the operating temperature. The maximum sensitivity in this study was obtained for the 89 nm film at an operating temperature of 200 °C.

Multiple Classifier Systems for Multisensor Data Fusion

I. INTRODUCTION

In many applications of pattern recognition and automated identification, it is not uncommon for data obtained from different sensors monitoring a physical phenomenon to provide complementary information. A suitable combination of such information is usually referred to as data or information fusion, and can lead to improved accuracy and confidence of the classification decision compared to a decision based on any of the individual data sources alone.

We have previously introduced Learn++, an ensemble of classifiers based approach, as an effective automated classification algorithm that is capable of learning incrementally. The algorithm is capable of acquiring novel information from additional data that later become available after the classification system has already been designed. To achieve incremental learning, Learn++ generates an ensemble of classifiers (experts), where each classifier is trained on the currently available database. Recognizing the conceptual similarity between data fusion and incremental learning, we discuss a similar approach for data fusion: employ an ensemble of experts, each trained on data provided by one of the sources, and then strategically combine their outputs. We have observed that the performance of such a system in decision making applications is significantly and consistently better than that of a decision based on a single data source across several benchmark and real world databases.

The applications for such a system are numerous, where data available from multiple sources (or multiple sensors) generated by the same application may contain complementary information. For instance, in non-destructive evaluation of pipelines, defect information may be obtained from eddy current, magnetic flux leakage images, ultrasonic scans, thermal imaging; or different pieces of diagnostic information may be obtained from several different medical tests, such as blood analysis, electrocardiography or electroencephalography, medical imaging devices, such as ultrasonic, magnetic resonance or positron emission scans, etc. Intuitively, if such information from multiple sources can be appropriately combined, the performance of a classification system (in detecting whether there is a defect, or whether a diagnostic decision can be made) can be improved. Consequently, both incremental learning and data fusion involve learning from different sets of data. In incremental learning supplementary information must be extracted from new datasets, which may include instances from new classes. In data fusion, complementary information must be extracted from new datasets, which may represent the data using different features.

Traditional methods are generally based on probability theory (Bayes' theorem, Kalman filtering) or on decision theory, such as Dempster-Shafer (DS) theory and its variations, which were primarily developed for military applications, most notably target detection and tracking. Ensemble of classifiers based approaches seek to provide a fresh and a more general solution for a broader spectrum of applications. It should also be noted that in several applications, such as the nondestructive testing and medical diagnostics mentioned above, the data obtained from different sources may have been generated by different physical modalities, and therefore the features obtained may be heterogeneous. While probability or decision theory based approaches become more complicated in such cases, heterogeneous features can easily be accommodated by an ensemble based system, as discussed below.

An ensemble system combines the outputs of several diverse classifiers or experts. The diversity in the classifiers allows different decision boundaries to be generated by using slightly different training parameters, such as different training datasets. The intuition is that each expert will make different errors, and strategically combining these classifiers can reduce the total error. Ensemble systems have attracted a great deal of attention over the last decade due to their reported superiority over single classifier systems on a variety of applications.

Recognizing the potential of this approach for incremental learning applications, we have recently developed Learn++, and shown that Learn++ is indeed capable of incrementally learning from new data. Furthermore, the algorithm does not require access to previously used data, does not forget previously acquired knowledge, and is able to accommodate instances from classes previously unseen in earlier training. The general approach in Learn++, much like those in other ensemble algorithms, such as AdaBoost, is to create an ensemble of classifiers, where each classifier learns a subset of the dataset. The classifiers are then combined using weighted majority voting. In this contribution, we review the Learn++ algorithm suitably modified for data fusion applications. In essence, for each dataset generated from a different source and/or using different features, Learn++ generates a new ensemble of classifiers, which are then combined using weighted majority voting.

II. LEARN++

The pseudocode of the Learn++ algorithm, as applied to the data fusion problem, is provided in Figure 1, and is described in detail in the following paragraphs.

For each database FS_k, k = 1, …, K, consisting of a different set of features and submitted to Learn++, the inputs to the algorithm are (i) a sequence S_k of m_k training data instances x_i along with their correct labels y_i; (ii) a supervised classification algorithm BaseClassifier, generating individual classifiers (henceforth, hypotheses); and (iii) an integer T_k, the number of classifiers to be generated for the k-th database.

Each hypothesis h_t, generated during the t-th iteration of the algorithm, is trained on a different subset of the training data. This is achieved by initializing a set of weights for the training data, w_t, and a distribution D_t obtained from w_t (step 1). According to this distribution, a training subset TR_t is drawn from the training data S_k (step 2). The distribution D_t determines which instances of the training data are more likely to be selected into the training subset TR_t. The base classifier is trained on TR_t in step 3, which returns the t-th hypothesis h_t. The error of this hypothesis, ε_t, is computed on the current database S_k as the sum of the distribution weights of the misclassified instances (step 4). This error is required to be less than 1/2 to ensure that a minimum reasonable performance can be expected from h_t. If this is the case, the hypothesis h_t is accepted and the error is normalized to obtain the normalized error (step 5).

Fig. 1. The Learn++ algorithm for data fusion

If ε_t ≥ 1/2, the current hypothesis is discarded, and a new training subset is selected by returning to step 2. All t hypotheses generated thus far are then combined using weighted majority voting (WMV) to obtain a composite hypothesis H_t. In WMV, each hypothesis is assigned a weight that is inversely proportional to its error, giving a higher weight to classifiers with smaller training error. The error of the composite hypothesis H_t is then computed in a similar fashion as the sum of the distribution weights of the instances that are misclassified by H_t (step 6).

The normalized composite error B_t is obtained, which is then used for updating the distribution weights assigned to individual instances in step 7. The distribution weights of the instances correctly classified by the composite hypothesis H_t are reduced by a factor of B_t; hence, when the distribution is re-normalized in step 1 of the next iteration, the weights of the misclassified instances are effectively increased. We note that this weight update rule, based on the performance of the current ensemble, facilitates learning from new data. This is because, when a new dataset is introduced (particularly with new classes or features), the existing ensemble (H_t) is likely to misclassify the instances that have not yet been properly learned, and hence the weights of these instances are increased, forcing the algorithm to focus on the new data.

An additional set of weights is introduced in data fusion applications for each ensemble. These weights represent the importance and reliability of the particular data source and can be assigned based on former experience (e.g., for diagnosing a neurological disorder we may know that magnetic resonance imaging (MRI) is more reliable than electroencephalography, and we may therefore choose to give a higher weight to the classifiers trained with MRI data), or they can be based on the performance of the ensemble trained on the particular feature set on its own training data. We have calculated a set of such weights α_k for the k-th dataset based on the training performance of the ensemble trained on the k-th dataset, and adjusted the voting weights using α_k. The adjusted weight of each classifier is then used during the weighted majority voting for the final hypothesis H_final.
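The paper does not give a closed-form mapping from training performance to α_k, so the following sketch simply uses the ensemble's weighted-majority training accuracy as the source weight; this particular choice is an assumption for illustration, not the authors' formula.

```python
import numpy as np

def source_weight(ensemble, X_train, y_train):
    """One plausible choice (an assumption) for the data-source weight alpha_k:
    the training accuracy of the k-th ensemble's weighted-majority vote on its
    own training data S_k."""
    classes = np.unique(y_train)
    votes = np.zeros((len(X_train), len(classes)))
    for h_t, beta_t in ensemble:                      # ensemble: list of (h_t, beta_t) pairs
        pred = h_t.predict(X_train)
        for j, c in enumerate(classes):
            votes[:, j] += (pred == c) * np.log(1.0 / beta_t)
    ensemble_pred = classes[votes.argmax(axis=1)]
    return float(np.mean(ensemble_pred == y_train))   # used to scale that ensemble's votes
```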

The schematic representation of the algorithm is provided in Figure 2. Simulation results of Learn++ on incremental learning using several datasets, as well as comparisons to other methods of incremental learning such as Fuzzy ARTMAP, can be found in [11]. The simulation results of Learn++ on two applications of data fusion are presented below, which primarily include additional detail, updated results and further insight beyond those presented in [14, 15]. Both of these applications are real-world applications, one involving the combination of ultrasonic and magnetic flux leakage data for identification of pipeline defects, and the other involving the combination of chemical sensor data from several sensors for identification of volatile organic compounds.

Fig. 2. Schematic representation of algorithm

THE EFFECT OF THICKNESS ON ZNO THIN FILM CO GAS SENSOR

3. Results and discussion

3.1 Structural Characteristics

The sensing mechanism of semiconducting ZnO belongs to the surface-controlled type, where the gas sensitivity is determined by the surface adsorption sites/area. The surface morphology varied significantly with the film thickness; therefore the total adsorption area exposed to the target gas may also vary with the film thickness. In this paper, the adsorption area as a function of ZnO film thickness was controlled by changing the deposition time among 30, 60, 90, 120, 150 and 180 min. The ZnO film thickness increased from 89 nm to 510 nm as the deposition time varied from 30 to 180 min.

Figure 2. EDAX spectrum of ZnO thin film

As shown in Fig. 2, the spectrum indicates that the prominent peak is the Zn line, followed by the O line. The other peak in the spectrum is an Au peak from the gold-coating treatment applied in preparation for SEM observation.

Fig. 3. SEM morphology of the surface of ZnO films with various thicknesses

(a) 89 nm (b) 240 nm (c) 425 nm

The surface morphologies of ZnO films of various thicknesses are shown in Fig. 3. It is seen that the films are smooth in morphology, with the grain size much more developed as the film thickness is increased.

3.2 Sensing Characteristics of the thin film sensors

In general, the sensitivity of the sensor is affected by the operating temperature. A higher temperature enhances the surface reaction of the thin films and gives a higher sensitivity within a certain temperature range.

Fig. 4. The relation between sensitivity and operation temperature of ZnO films under 2100 ppm CO atmosphere

Fig. 4 shows the sensitivity as a function of operation temperature from 50 °C to 350 °C for the various thicknesses of deposited zinc oxide film under a 2100 ppm CO concentration. A rapid increase in sensitivity was observed as the operation temperature was increased to 200 °C, where it reached a maximum for the 89 nm film, and the sensitivity decreased thereafter with further increase in operation temperature. It can be inferred from this observation that the sensitivity increased as the film thickness decreased, and that the optimum operating temperature for the 89 nm film is the lowest compared to the other films. The sensitivity of the metal oxide semiconductor sensor is mainly determined by the interaction between the target gas and the surface of the sensor. The greater the surface area of the material, the stronger the interaction between the adsorbed gases and the sensor surface, i.e. the higher the gas sensing sensitivity. It can be observed from the SEM morphologies shown in Fig. 3 that the grain size is small in the 89 nm film while it is large in the 425 nm one, so the grain boundary area is the largest when the film thickness is 89 nm in this study.

Fig. 5. Transient responses of ZnO films (89 nm thick) exposed to various CO concentrations at 300 °C

The sensitivity as a function of CO gas concentration for the as-deposited 89 nm ZnO film at 300 °C is shown in Fig. 5. The sensitivity of the ZnO gas sensor increased as the CO gas concentration was increased from 400 to 2100 ppm, and it dropped rapidly when the CO gas was removed, indicating that the gas sensor has a good response for different CO concentrations. Besides, it took almost the same time for the sensor to reach the maximum sensitivity for different CO concentrations. This result is consistent with the conclusion that the operation temperature dominates the response time.

Fig. 6. Relationship between gas sensitivity and operating temperature at various CO concentrations for 89 nm ZnO films

Fig. 6 shows the sensitivity as a function of operation temperature for the various CO concentrations. It can be seen that the sensitivity of the ZnO films increased as the CO concentration was increased from 200 to 2100 ppm.

Fig. 7. Effect of operation temperature on the sensitivity of the as-deposited 89 nm ZnO thin film under 2100 ppm CO atmosphere

Fig. 7 shows the dynamic variation of sensitivity with operation time at various operation temperatures for the as-deposited 89 nm ZnO films in a 2100 ppm CO gas atmosphere. It was observed that the sensitivity reached a maximum when the operation temperature was increased to 350 °C.

4. Conclusion

The structures and sensing properties of ZnO films with various thicknesses, obtained by an rf magnetron sputtering system and used as a CO gas sensor, were investigated. The structural characteristics reveal that the grain size was enhanced as the film thickness was increased, which resulted in a decrease in the total surface area and, as a result, a lower sensing sensitivity. Furthermore, the sensitivity of the gas sensor also increased as the concentration of CO gas was increased. The sensitivity as well as the response time were improved by increasing the operation temperature. The maximum sensitivity in this study was obtained for the 89 nm film at an operation temperature of 200 °C.
