
Architecture for adaptive multimodal dialog systems based on VoiceXML

Georg Niklfeld (1), Robert Finan (2), Michael Pucher (1)

(1) Telecommunications Research Center Vienna (ftw.)

(2) Mobilkom Austria AG

niklfeld@ftw.at, r.finan@mobilkom.at, pucher@ftw.at

Abstract

This paper describes application-oriented research on architectural building blocks and constraints for adaptive multimodal dialog systems that use VoiceXML as a component technology. The VoiceXML standard is well supported and promises to make the development of speech-enabled applications so easy that everyone with general web programming skills can accomplish it. The paper proposes a software architecture for multimodal interfaces that emphasizes modularity, in order to strengthen this potential by clearly separating different types of development tasks in a multimodal dialog system. The paper argues that adaptivity is a central design concern for multimodal dialog systems, but that adaptivity is not facilitated by the current VoiceXML standard. This and other examined limitations of VoiceXML for building multimodal dialog systems should be repaired in upcoming work on a successor standard that will explicitly target multimodal applications.

1. Introduction

Multimodal interfaces are important for pending 3G telecommunication data services, which will be invoked from devices that have speech input/output capabilities but only small touch-screens. The VoiceXML standard [1] is important in this context because it promises a streamlined development process for speech-enabled applications, which will allow third-party software developers without much speech processing expertise to create the wide range of applications that is needed to make 3G and speech access attractive to consumers. A broad array of adaptation techniques is required, because in mobile usage environmental conditions vary greatly, and because multimodal interfaces have many open parameters that need to be set for optimum usability in specific circumstances. In this paper we describe design work on a multimodal dialog system prototype developed by our group which attempts to reflect these issues. The paper describes a set of building blocks that emerge, but also shortcomings of the VoiceXML standard that should be addressed in future derivative standards.

2. Multimodality, adaptivity, standardized components

This section presents the arguments for the design goals of multimodality, adaptivity, and the use of standardized components, which motivate the model architecture described in the subsequent section.

2.1. The importance of multimodality

Human face-to-face communication is multimodal, combining the acoustic channel with visual and occasionally tactile channels. Human beings are therefore well equipped to communicate multimodally, and multimodal communication is perceived as very natural. In human-computer interaction, the use of spoken or written natural language poses complexities that have led to a dominance of visual interfaces consisting of text and usually 2D graphics. Visual interfaces are very efficient under some circumstances: written text allows rapid information uptake; visual interfaces on large screens allow for showing large amounts of information simultaneously, and humans are able to focus on bits that interest them without having to process all the rest in depth; finally, visual interfaces are preferred consistently for input and output of spatial information [2]. In visual interfaces, text output can be used for information for which graphical metaphors are not available, text input for unrestricted content or inherently linguistic information such as names.

Yet, considering data services on 3G mobile devices, the following factors constitute obstacles for relying solely on visual interfaces, and imply a real usefulness of added speech interface capabilities:

- The terminal devices where 3G data services run will have small displays in terms of size and resolution, although a significant improvement over current WAP phones is expected. Still, it will not be possible to mirror the user experience of today's commercial web sites concerning the amount of information that can be presented simultaneously. (Note also that the speed disadvantage of TTS is less problematic for small amounts of information.)

- While 3G terminals will be produced in various form factors, many of them will be too small to provide comfortable alphanumeric keyboards, relying instead on pointing devices such as a pen/touch-screen combination. Without a keyboard, text input becomes cumbersome, even when some handwriting recognition/graffiti technology is provided. Speech input is a logical alternative. The input dilemma of 3G devices is made yet more severe where even a pointing device is lacking, as in current mobile phones. This WAP-like scenario makes data services without speech support so unattractive that we consider it unlikely that large numbers of 3G devices intended for data access will be in this category.

- In mobile usage situations, visual access to the display may be impossible in certain situations, such as when driving a car. Also, even where pointing device and keyboard are provided, access to them is not possible when a user has her hands busy with other activities. A speech interface may still be usable in such situations.

We contend that the combination of these considerations takes the case for multimodal interfaces for 3G services beyond a nice-to-have status to that of a significant building block that needs to be put in place for successful deployment scenarios of 3G infrastructures to emerge. This is also the motivation for application-oriented research like the one described here, which examines the technical feasibility of an economically attractive development model for multimodal interfaces.

2.2. Types of adaptivity

We distinguish four practically relevant types of adaptation tasks in multimodal dialog systems, as indicated in Table 1.

                   situation attributes                       user attributes
user control       user-controlled situation adaptation       user-controlled user adaptation
system control     system-controlled situation adaptation     system-controlled user adaptation

Table 1: Adaptation types

The discussion in section 2.1 referred to the variability of usage situations for data services that comes along with mobile usage. We refer to this type of adaptive requirement as user-controlled situation adaptation. It demands user interfaces that allow a flexible choice of modalities at many points during interaction with the system. We believe that typically the visual interface will be the primary interface, which is active at all times but allows the user to branch off into voice usage for a suitable chunk of the interaction.

System-controlled situation adaptation comprises techniques where the system autonomously monitors the interaction, attempts to reason about situation attributes, and adapts in order to avoid problems. For example, high levels of ambient noise may lead to poor performance of speech recognition. The system may find out about this either by monitoring for low confidence scores in speech recognition, or by monitoring the frequency of corrections by the user. It may then choose either to deactivate speech input completely, or, in a system that can process coordinated, simultaneous inputs from the visual and speech modalities, it may assign relatively lower weights to the results from speech recognition and higher weights to recognition results from the visual interface, e.g. gesture or handwriting recognition [3].

Under the item of user-controlled user adaptation, multimodal dialog systems should allow an explicit modality choice to suit a user's personal preferences. For example, it should be possible to disable all speech elements of the interface. Where user profiles are stored between sessions, the system should allow the user to configure an assignment of elements of the interface to one or the other modality, either through an explicit profile-configuration mode (e.g. a user might want to assign list boxes to the speech modality), or by recording the modality choices of the user in the background during interactions. Again, where systems allow coordinated, simultaneous input on multiple modalities, the user should be able to select a relative weighting for the modalities.

Finally, system-controlled user adaptation is an important issue in speaker-independent automatic speech recognition (ASR) systems as common in telephony-oriented dialog systems. As the available amount of adaptation data from a speaker grows during a session, iterative speaker adaptation can reduce the word error rate of a speaker-independent system to about half, until the error rate of comparable recognizers that are trained in a speaker-dependent fashion is matched [4].

Outside the acoustic domain, other relatively static characteristics of users can be reflected in user models [5] that can lead to adaptation of dialogs or user interfaces. Such user models collect data on: knowledge of the user, to influence the amount of guidance provided; abilities of the user, to influence the choice of more or less complex input patterns by the system; misunderstandings exhibited by the user, to provide tailored explanations. User models can either be acquired anew for each user during the interaction (although for telephony-based dialog systems, which are typically used infrequently by any user, this may not be feasible), or an existing user model may be attributed to new users, and then adapted further.

While all four discussed adaptation types are important for good multimodal dialog systems, the subsequent discussion of available standardized components and system architectures, and in particular of the VoiceXML standard, will show that these frameworks currently do not support system-controlled adaptation adequately, especially not in those cases where information from speech recognition and from the dialog management platform would have to be integrated. We will return to this problem in section 5.

2.3. Standardized components

In 1999, the EU-sponsored DISC project undertook a comparison of seven toolkits for dialog management in spoken dialog systems [6]. The dialog management component is the central part of a dialog system and of particular importance in a discussion about streamlined development processes for multimodal interfaces, because it is the bridge between the intelligent interface technologies and the underlying application logic of application servers and databases. The DISC survey finds platforms that offer rapid prototyping support, partly via integrated development environments. As most of the toolkits are shipped together with speech recognition and speech synthesis engines, this frees application developers to focus, firstly, on the dialog design, and secondly, on interfacing to the application core.

The work done with W3C support on the VoiceXML standard [1] represents another conceptual step forward from that state of affairs, for two reasons. Firstly, being an internet standard, VoiceXML goes beyond manufacturer-specific toolkits that are not compatible with each other by providing a credible set of target interfaces for all players in the industry. Secondly, it chooses XML and the associated web programming technology both as a format for specification of the dialog management, and for interfacing to the application logic. The two most important aspects of a standardized component framework for dialog systems have thus been brought in line with web technology, which is what was needed to create the promise of a platform for speech-enabled applications that is easily accessible to developers with a general, and not speech-processing-specific, background. Today one can add that VoiceXML has received important support from major players in the industry, who have made development platforms for VoiceXML-based applications available free of charge for developers. The only potential downside of recent developments is that the platform manufacturers have also included some proprietary elements in their implementations, making direct transfer of VoiceXML applications from one platform to another again impossible.

While early requirements documents for VoiceXML explicitly treat multimodality in the context of the discussion on the inclusion of DTMF input in the standard [7], the standard is not written to cover general multimodal applications. Subsequently, W3C has drafted a requirements document for a multimodal dialog language [8], and currently a new working group on that topic is being assembled. At present, however, the standard provides no intentional support for general multimodal systems. The most important drawback that we found in our attempts to nevertheless build a multimodal architecture around VoiceXML is that it is not possible to make an active voice dialog running in a VoiceXML browser aware of events that occur outside the voice browser, e.g. at a visual interface: VoiceXML neither allows for linking in Java applets that could receive pushed notifications, nor does it provide any other interface for external events. Our architecture for multimodal dialog systems based on VoiceXML is considerably influenced by this fact.

3. Architecture

In a project at our institution that follows the longer-term goal of developing architectures for multimodal dialog systems for 3G telecommunications infrastructures, for the benefit of our supporting partner companies from the Austrian telecommunications industry, we are currently developing an architecture that shall: use mainstream technologies and standards as far as possible, to test their capabilities and limitations; be general enough to scale from our first small prototypes to bigger systems that are close to market-readiness; and provide the basis for usability research on multimodal interfaces.

The architecture shall support a multimodal interface that combines a visual interface, via HTML and Java applets in a visual web browser, with a voice interface built using VoiceXML. Communication between the visual browser and the voice browser is mediated by a central application server that is built using Java servlet technology in combination with a web server.

In [8], three types of multimodal input are distinguished:

1. Sequential multimodal input is the simplest type, where at each step of the interaction either one or the other input modality is active, but never more than one simultaneously.

2. Uncoordinated, simultaneous multimodal input allows concurrent activation of more than one modality. However, should the user provide input on more than one modality, these inputs are not integrated but are processed in isolation, in random order.

3. Coordinated, simultaneous multimodal input exploits multimodality to the full, providing for the integration of complementary input signals from different modalities into joint events, based on timestamping.

Because we cannot send events about the visual interface to the dialogs in the voice browser (cf. section 2.3), we maintain that only the sequential multimodal input pattern can be properly realized with the current version of the VoiceXML standard: the other patterns require that even while the voice interface is active, e.g. listening for user speech, it must be possible for the multimodal dialog to change state based on inputs received from the visual interface. In the sequential mode, on the other hand, it is possible to deactivate the visual interface whenever voice input is activated.

In this case, then, the choice of multimodal interaction pattern is determined by features of the components used. In many realistic application development efforts, the interaction pattern will instead be determined by user-level requirements that have to be met. In any case, the choice of multimodal interaction pattern will certainly be a dimension in which variation occurs. For the purposes of the demonstrator development in our project, it was important to find a software architecture that can remain stable across different patterns and across interface types.

This can be accomplished by a modular or object-oriented design which separates the central application server into the following functions:

visual communicator: a modular handler for the visual interface;

voice communicator: a modular handler for the voice interface;

transaction module: encapsulates the transactions needed for the application logic;

multimodal integrator: handles all message flows between the interface modules, and between the interface modules and the transaction module.

The resulting system architecture is shown in Fig. 1.

Figure 1: Multimodal Architecture

Both the visual and the voice user interface are realized by standardized components in line with general web technologies (visual web browser and VoiceXML browser). Within the application server, the architecture stipulates a specialized interface handler for each modality.

For example, in our prototype the voice communicator prepares VoiceXML documents which it puts on a web server that is associated with the application server. Once the VoiceXML interpreter of the voice browser has been started (after a user-triggered event, e.g. an incoming phone call), each of the VoiceXML documents processed by the interpreter is programmed so as to terminate with a load of a successor document from the web server. Our voice communicator simply prepares these successor documents based on messages it receives from the multimodal integrator. When the voice interface is not active, the prepared documents contain just an idle loop that terminates after some time, e.g. 0.5 seconds. When the multimodal integrator decides (based on user input) that a chunk of the interaction shall be performed via voice, it sends a message indicating field labels and data types (e.g. the range of an enumeration) to the voice communicator, which instead of an idle VoiceXML document now produces a VoiceXML dialog document that executes the respective voice dialog with the user and returns results to the multimodal integrator.
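A minimal sketch of such an idle-loop document is given below, assuming VoiceXML 1.0 syntax; the servlet URL is a hypothetical placeholder rather than the address used in our prototype. The document pauses briefly and then fetches its successor from the application server, so that a newly prepared dialog document can take effect at the next polling cycle.

<?xml version="1.0"?>
<vxml version="1.0">
  <!-- Idle-loop document: served while the voice interface is inactive.
       It waits for roughly half a second and then loads the successor
       document from the application server (placeholder URL). -->
  <form id="idle">
    <block>
      <prompt><break msecs="500"/></prompt>
      <submit next="http://appserver.example.com/servlet/voice"/>
    </block>
  </form>
</vxml>

When the integrator has requested a voice dialog, the same URL returns a dialog document instead, such as the route-finder sketch shown in section 4.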

The visual communicator is designed similarly to prepare HTML pages and applets for the visual browser. In our prototype, the visual interface includes controls that allow the user to explicitly activate the voice interface for a group of input fields. When this is done, the visual communicator deactivates all control elements on the visual interface, and sends a message with the user request for a voice dialog to the multimodal integrator.

The multimodal integrator is the only part of the proposed architecture where information from more than one modality is processed. The way this processing is done defines the multimodal interaction pattern. To change the pattern, the architecture envisages that one implementation of the multimodal integrator would simply be replaced by another, without any changes to other parts of the system. This of course presupposes that the interfaces between the multimodal integrator and the interface handlers have been defined so generally that all occurring multimodal interaction patterns are covered. A description of this interface is an area for further study.
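While the definition of this interface is left for further study, a purely illustrative sketch may help to fix ideas: the request that the multimodal integrator sends to the voice communicator (field labels plus data types, as described above) could for instance be encoded as a small XML message. All element and attribute names here are invented for this illustration and do not define the actual interface.

<!-- Hypothetical request from the multimodal integrator to the voice
     communicator: ask for a voice dialog over one enumeration-valued
     field. Names and values are illustrative only. -->
<voice-dialog-request>
  <field label="district" type="enumeration">
    <value>1</value>
    <value>2</value>
    <value>23</value>
  </field>
</voice-dialog-request>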

4. Application

We believe that within the 3G data service scenario discussed earlier, even with only sequential multimodal input it is possible to create multimodal interfaces that are better than existing interfaces that are just visual. Currently we are developing multimodal interfaces to existing web services with visual interfaces, and plan to perform comparative usability studies. For our prototype, we target a typical route-finder application, where the user enters two addresses in Vienna and the system returns a graphical map that shows the shortest path between the two locations (such a service is available on the WAP portal of one of our partner companies). On mobile devices that do not provide a comfortable keyboard, entering street names is difficult. The range of possible names is also too large (c. 8000 for Vienna) to make selection from a list box an option. In our interface, we ask the user to specify the city district or the initial letter of the street name first. This can be entered by voice, or via selection from a visual list, and narrows the search space down to 400 possible street names on average. We assume that most users will prefer to use speech recognition for the selection among these 400. If speech recognition proves too unreliable even for that, then it is also possible to specify both district and first letter, which narrows the search space down to c. 16 names.
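The VoiceXML document generated for the first narrowing step might look roughly as follows. This is a sketch only: in the prototype the grammars and URLs are produced dynamically by the voice communicator, and the grammar content shown here is abbreviated and illustrative.

<?xml version="1.0"?>
<vxml version="1.0">
  <!-- First narrowing step: the user names one of Vienna's 23 districts.
       The successor document, generated once the district is known, would
       then offer the roughly 400 street names of that district. -->
  <form id="district_step">
    <field name="district">
      <prompt>Please say the district of the starting address.</prompt>
      <grammar type="application/x-jsgf">
        first | second | third | twenty third
      </grammar>
      <filled>
        <submit next="http://appserver.example.com/servlet/voice"
                namelist="district"/>
      </filled>
    </field>
  </form>
</vxml>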

5. Discussion

In our view, these preliminary results show that VoiceXML can be used to build simple, but nevertheless useful, multimodal interfaces for typical data services for mobile access. Once first implementations of the voice communicator and the multimodal integrator are available, it should become quite easy for the general web programmer to generate further multimodal interfaces for existing data services.

Yet, returning to the earlier discussion of adaptivity requirements for multimodal dialog systems in section 2.2, we have noticed in our architecture work that some important types of system-controlled adaptation are not supported by the highly modular approach that underlies both VoiceXML and our multimodal architecture. Information about environment characteristics that is easily obtainable in speech recognition, such as the level of ambient noise and confidence scores in speech recognition, is kept local in the speech recognition component used by the VoiceXML browser (as part of the implementation platform, which is not further considered by the standard). This information is not accessible in the VoiceXML browser, and therefore neither in the voice communicator nor in the multimodal integrator, where it could be used to influence modality selection. Other types of system-controlled adaptation can be implemented in the multimodal integrator, provided the interface browsers and interface handlers are designed to provide the required information flows.

Therefore, while we strongly support modular architectures and components that will allow efficient development of multimodal interfaces by a broad base of web developers, we would also appeal to the standards bodies to include dedicated interfaces in the component standards to enable system-controlled, centrally processed adaptivity, for example in the currently planned multimodal extension to VoiceXML.

6. Conclusions

This paper has demonstrated that, using VoiceXML, a modular architecture for simple yet useful multimodal interfaces to data services for mobile access can be defined. We plan to continue this work to define further building blocks for a development model for speech and multimodal interfaces that is streamlined with web technologies. A problem that we have identified is the lack of support for system-controlled adaptation in VoiceXML. We hope that this will be addressed in future work by the standardization bodies.

Acknowledgements

This work was supported within the Austrian competence center program Kplus, and by the companies Alcatel, Connect Austria, Kapsch, Mobilkom Austria, and Nokia.

7. References

[1] W3C, "Voice eXtensible Markup Language (VoiceXML) version 1.0," http://www.w3.org/TR/2000/NOTE-voicexml-20000505/, 2000.

[2] S. L. Oviatt, A. DeAngeli, and K. Kuhn, "Integration and synchronization of input modes during multimodal human-computer interaction," in Proc. of CHI 97, New York, 1997, pp. 415–422, ACM Press.

[3] A. Rogozan and P. Deléglise, "Adaptive fusion of acoustic and visual sources for automatic speech recognition," Speech Communication, vol. 26, no. 1-2, pp. 149–161, 1998.

[4] C. J. Leggetter and P. C. Woodland, "Speaker adaptation of continuous density HMMs using multivariate linear regression," in Proc. of ICSLP 94, Yokohama, 1994, pp. 451–454.

[5] A. Kobsa and W. Wahlster, Eds., User Models in Dialog Systems, Springer, Berlin, 1989.

[6] DISC, "Deliverable D2.7a: State-of-the-art survey of dialogue management tools," Tech. Rep., Esprit Long-Term Research Concerted Action No. 24823, 1999.

[7] W3C, "Dialog requirements for voice markup languages, W3C working draft 23 December 1999," http://www.w3.org/TR/voice-dialog-reqs/, 1999.

[8] W3C, "Multimodal requirements for voice markup languages, W3C working draft 10 July 2000," http://www.w3.org/TR/multimodal-reqs, 2000.
