Provable Strategies for Vision-Guided Exploration in Three Dimensions

Kiriakos N. Kutulakos    Charles R. Dyer    Vladimir J. Lumelsky

Computer Sciences Department

University of Wisconsin

Madison, Wisconsin 53706 USA

Abstract

An approach is presented for exploring an unknown, arbitrary surface in three-dimensional (3D) space by a mobile robot. The main contributions are (1) an analysis of the capabilities a robot must possess and the trade-offs involved in the design of an exploration strategy, and (2) two provably-correct exploration strategies that exploit these trade-offs and use visual sensors (e.g., cameras and range sensors) to plan the robot's motion. No such analysis existed previously for the case of a robot moving freely in 3D space. The approach exploits the notion of the occlusion boundary, i.e., the points separating the visible from the occluded parts of an object. The occlusion boundary is a collection of curves that "slide" over the surface when the robot's position is continuously controlled, inducing the visibility of surface points over which they slide. The paths generated by our strategies force the occlusion boundary to slide over the entire surface. The strategies provide a basis for integrating motion planning and visual sensing under a common computational framework.

1 Introduction

An important, unsolved problem in robotics is the development of strategies for real-time and provably-correct exploration of unknown environments where three-dimensional reasoning and visual sensing are necessary. Such exploration strategies are necessary when a mobile robot must control its position in order to reach a desired location in an unknown three-dimensional environment, or to produce a map of it. The main focus of this paper is the development of strategies for solving the object exploration task: In this task the goal of the robot is to plan a collision-free path allowing it to sense all points on the surface of an object in the environment. We assume that the robot is a point and that it is equipped with a camera or a range sensor. We assume that the robot's environment contains objects that are finite volumes bounded by closed and connected surfaces of arbitrary shape.

In general, not all of an object's surface can be sensed from every position of the robot. This result implies that visual sensors (i.e., sensors that provide shape information about the visible portions of object surfaces) are important for planning a robot's motion.

In this paper we develop strategies that guarantee correctness and bounded-length paths for exploring arbitrary geometrically-complex, three-dimensional environments. When trying to perform tasks that depend on the appearance of an object (as is the case with three-dimensional exploration and path planning), provable correctness is critical (see Section 2): Object appearance can drastically change depending on the position of the robot, making ad hoc strategies unpredictable, incomplete, and even non-terminating.

To our knowledge, no strategies currently exist for solving the object exploration problem in non-polyhedral environments when the robot is able to move freely in space (i.e., has three degrees of freedom in position). In the case of polyhedra, the strategies developed (e.g., [9]) rely on the fact that the objects consist of a finite collection of planar faces, and produce paths whose lengths diverge in the limit. A number of recent approaches in computer vision have been suggested for exploring the surfaces of objects [8, 10-12], but their correctness in arbitrary environments has not been investigated.

2 Overview of our Approach

As a robot moves in its environment, the set of visible points changes. In our approach we capture and control these visibility transitions by analyzing the occlusion boundary of objects. For a given robot position, this is the collection of closed curves bounding the occluded object points (Section 3). The shape and position of these surface curves almost always depends continuously on the robot's position: When the robot's position is continuously controlled, these curves can be thought of as sliding on the surface. Furthermore, their topology (i.e., connectivity) can change only at a discrete set of robot positions. If an occlusion boundary curve slides over a previously-occluded point during the robot's motion, that point becomes visible. This allows us to formulate exploration as the task of forcing the occlusion boundary to slide over the points occluded from the robot's initial position (Section 3). Intuitively, the main difficulty in solving the exploration problem is that although the robot has some control over the motion of the occlusion boundary over the surface, this control is not complete; the motion of the occlusion boundary also depends on the shape of the surface itself.

Figure 1: (a)-(c) Forcing the occlusion boundary curve, shown as a white circle in the first image, to slide over the dark curve drawn on the torus.

Figure 2: Points p and q belong to the occlusion boundary. Only point p belongs to the visible rim.

The strategies we present can be extended to handle piecewise-smooth surfaces. These surfaces are smooth except for zero- or one-dimensional sets whose points have a finite number of normals assigned to them.

In what follows, S denotes the object's surface, v the robot's position, and n_p the surface normal at a point p on S.

Definition 3.1 The visible rim, R(v), is the set of points p on S for which the line segment vp is tangent to S at p and does not otherwise intersect S.

Definition 3.2 The occlusion boundary, O(v), bounds the visible and the occluded points. It is the set of points p on S for which the line segment vp contains at least one visible rim point and does not otherwise intersect S.

Clearly, the occlusion boundary is a superset of the visible rim (Figure 2). A convenient way to visualize and study the occlusion boundary and the visible rim is to look at their spherical projections onto an image centered at the robot's position (Figure 3). Their projections are identical; this projected contour is known as the occluding contour [1].

In general, the occlusion boundary is a collection of closed, piecewise-smooth curves whose number and shape depend on the robot's position. The endpoints of the smooth segments of these curves project to cusps or T-junctions on the occluding contour (Figure 3). Given a smooth path v(t) for the robot, the curves move on the surface in a way that depends smoothly on t, except for a discrete set of values of t at which their topology changes. These changes occur precisely when the topology of the occluding contour changes [13-15].

Between any two consecutive topological transitions the occlusion boundary can be thought of as a collection of smooth curves that "slide" over the surface of the object as the robot moves, causing deformations to the occluding contour. In particular, given an instantaneous direction of motion, we can express the motion of the occlusion boundary using the epipolar parameterization. The details of this parameterization are not important here and the reader is referred to [16]. The important point is that given a smooth segment of the occluding contour, this parameterization allows us to define the segment of the occlusion boundary that corresponds to it.

Figure 3: The occluding contour. Each point on the occlusion boundary projects to a single point on the spherical image. For simplicity, the image of the occlusion boundary on a plane tangent to the spherical image is shown.

The motion of the occlusion boundary induced by the robot's motion directly affects the set of surface points becoming visible. When the occlusion boundary at time t slides over a surface point that was occluded from the positions v(t'), t' < t, that point becomes visible. The following theorem (see [17] for a proof) gives a qualitative characterization of these visibility transitions (Figure 4):

Theorem 3.1 Let q be a point of O(v) and let p be a point of tangency of the segment vq with S. If σ is a smooth occlusion boundary segment whose points satisfy n_q · (v - q) < 0, all points on σ are occluded from position v. Conversely, if n_q · (v - q) > 0, all points on σ are visible from position v.
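To make the tangency test concrete, the following sketch (our illustration, not the authors' implementation) classifies sampled points of a unit sphere as visible or occluded from a viewpoint by the sign of n_p · (v - p). For a convex surface like the sphere this local sign test alone decides visibility, and its near-zero set approximates the visible rim; for general surfaces it is only the tangency part of Definitions 3.1-3.2.

```python
import numpy as np

def classify_points(points, normals, v, tol=1e-6):
    """Classify surface samples of a convex object as 'visible',
    'occluded', or (approximately) 'rim' from viewpoint v, using the
    sign of <n_p, v - p>.  For convex surfaces no ray casting is
    needed; for general surfaces this is only the local tangency test."""
    d = np.einsum('ij,ij->i', normals, v - points)
    return np.where(d > tol, 'visible',
                    np.where(d < -tol, 'occluded', 'rim'))

# Unit sphere: the outward normal at a point equals the point itself.
rng = np.random.default_rng(0)
p = rng.normal(size=(1000, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
v = np.array([0.0, 0.0, 5.0])
labels = classify_points(p, p, v)
```

Moving v continuously moves the zero set of the dot product, i.e., the rim "slides" over the sphere, which is exactly the effect the exploration strategies control.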

Once a point becomes visible, it may be possible to recover its three-dimensional coordinates using a visual sensor. We consider two types of sensors:

Range sensors. With such a sensor the robot can determine the coordinates of all visible points.

Cameras. With such a sensor the robot can detect the occluding contour. Furthermore, when the robot moves along a path and knows its speed and acceleration, it can determine the coordinates of all points on the visible rim from the deformation of the occluding contour [16].

The exploration problem can now be formulated as follows:

Definition 3.3 (Range sensor-based exploration) Given a connected surface S and an initial position v(0), generate a finite-length path v(t), t in [0, 1], such that the occlusion boundary slides over all points on S that were occluded at v(0). In other words, if Occ(v(0)) is the set of points occluded at v(0), the path must satisfy Occ(v(0)) ⊆ ∪_t O(v(t)).

Definition 3.4 (Camera-based exploration) Given a connected surface S and an initial position v(0), generate a finite-length path v(t), t in [0, 1], that satisfies Occ(v(0)) ⊆ ∪_t R(v(t)).

To facilitate our discussion we use the following definition:

Definition 3.5 A point p on S is explored if and only if the three-dimensional coordinates of p were determined by the robot from some position along its path. The boundary of the explored points is the exploration frontier.

4 Incremental Exploration

Having provided a formulation for the exploration problem, this section considers a strategy for exploring an object of arbitrary shape. The idea is to control the robot's motion so that the set of explored points incrementally "grows" on the surface. To achieve this objective, the robot performs the following actions during the i-th iteration: It selects a point p_i on the exploration frontier, moves to a position where p_i is visible, and finally performs a sequence of motions forcing all points in a neighborhood of p_i to be explored. This process terminates when the set of explored points is identical to the object's surface, i.e., when the exploration frontier degenerates to a collection of isolated points.
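The iteration just described can be sketched on a graph-based stand-in for the surface (a hypothetical discretization of ours, not part of the paper): explored points grow outward, the frontier is the set of explored points with unexplored neighbors, and the loop stops when the frontier empties.

```python
def explore(adjacency, start, patch):
    """Toy incremental exploration on a graph-approximated surface.
    `patch(p)` plays the role of the neighborhood exploration strategy:
    it returns the set of points explored around p."""
    explored = set(patch(start))
    path = [start]
    while True:
        frontier = [q for q in explored
                    if any(r not in explored for r in adjacency[q])]
        if not frontier:            # frontier degenerated: done
            return explored, path
        p = min(frontier)           # select any frontier point
        path.append(p)              # move to where p is visible
        explored |= set(patch(p))   # explore a neighborhood of p

# A 6-node cycle as a crude closed "surface"; a patch is a point
# plus its immediate neighbors.
ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
explored, path = explore(ring, 0, lambda q: {q} | ring[q])
```

Each iteration strictly enlarges the explored set (a patch covers all neighbors of the chosen frontier point), which is the discrete analogue of the finiteness requirement of Definition 4.1.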

To completely specify our exploration strategy we need to answer four questions: (1) how to select point p_i, (2) how to find a position where p_i is visible, (3) how to explore a neighborhood of p_i, and (4) how to guarantee that only a finite number of iterations are necessary. The crucial point in our approach is that the third and fourth questions, which lie at the heart of the exploration problem, can be answered using a local strategy, i.e., one that does not depend on the history of the exploration process (the set of points explored and the path generated in the previous iterations). We discuss each of the four questions below.

In our approach, the point p_i is simply chosen to be any point on the exploration frontier. The coordinates of the points comprising the frontier are available to the robot because the exploration frontier bounds the set of explored points.

The answer to the second question is also simple. If the robot's sensor is a camera, all points on the exploration frontier were points on the visible rim at some previous position of the robot; if it is a range sensor, all points on the exploration frontier were points on the occlusion boundary at some previous position. Hence, it suffices for the robot to retrace its path back to a position where the selected point was on the occlusion boundary or the visible rim. This can be achieved if, along its path, the robot saves its current position together with the set of points belonging to the visible rim (or the occlusion boundary, if a range sensor is used) at that position. This information allows the robot to (1) associate with each exploration frontier point a set of previous positions from which that point is visible, and (2) plan a collision-free path to any such position.
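This bookkeeping can be sketched as a simple log (class and method names are our own, for illustration): each recorded robot position is stored with the rim or boundary points seen there, inverted into a map from point to candidate return positions.

```python
class ExplorationLog:
    """Record, for each position along the robot's path, which
    visible-rim (or occlusion-boundary) points were seen there, so any
    frontier point can be traced back to a position from which it is
    visible."""

    def __init__(self):
        self._seen_from = {}  # point -> list of positions, in path order

    def record(self, position, points):
        for p in points:
            self._seen_from.setdefault(p, []).append(position)

    def positions_seeing(self, point):
        """Previous positions from which `point` was on the rim/boundary."""
        return list(self._seen_from.get(point, []))

log = ExplorationLog()
log.record((0.0, 0.0, 1.0), ['a', 'b'])
log.record((0.0, 1.0, 0.0), ['b', 'c'])
```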

Now suppose that the selected point p_i is visible. We use a neighborhood exploration strategy to answer the third question. The only input to this strategy is the selected point p_i. The goal is to generate a path, based on information provided by the robot's sensor, such that an open neighborhood N(p_i) of p_i becomes explored.

In order to explore the surface in a finite amount of time, the neighborhood N(p_i) must be large enough so that only a finite number of such neighborhoods is necessary to cover the whole surface of the object.

Definition 4.1 Given a class of surfaces F, a neighborhood exploration strategy is complete for F if and only if for all surfaces S in F and all sequences p_1, p_2, ..., the sequence of explored neighborhoods N(p_1), N(p_2), ... is such that S = N(p_1) ∪ ... ∪ N(p_k) for some finite k.

Equipped with a complete neighborhood exploration strategy for F, the robot uses the following strategy to explore a surface S:

Surface Exploration Strategy

Step 1: If a range sensor is available, select a point p_1 on the occlusion boundary; otherwise select p_1 on the visible rim. (Initially the exploration frontier is identical to the occlusion boundary.)

Step 2: Use the neighborhood exploration strategy to explore a neighborhood of p_i. During this step, the set of points explored on the surface is expanded.

Step 3: Select any point p_{i+1} on the current exploration frontier. If no such point exists, stop; the exploration process is complete.

Step 4: If a range sensor is available, move to a previous position at which p_{i+1} is on the occlusion boundary; otherwise move to a previous position where p_{i+1} is on the visible rim.

Step 5: Repeat Steps 2-5.

The only step left unspecified in the above strategy is Step 2. The next two sections describe two neighborhood exploration strategies that can be used to implement this step. These strategies give rise to two different solutions to the exploration problem. The first strategy, which is the most general, uses a range sensor. It is complete for the class of smooth and connected surfaces of finite area (Section 5). The second one uses a camera and is complete only for the subclass of surfaces with no concavities, e.g., the torus or, more generally, surfaces produced by appropriately deforming spheres with handles (Section 6).

5 Exploration Using a Range Sensor

Suppose that the robot needs to explore the neighborhood of a selected occlusion boundary point p on S with the help of a range sensor. The strategy is based on the following observation: If the robot looks at p from the direction of its surface normal n_p and is positioned close enough to p, all points in a neighborhood of p must be visible. Intuitively, the reason for this is that the smoothness of S around p severely constrains the variation of the surface normals in the point's neighborhood; this limits the sets of points that can be occluded when the robot is close to p along n_p.

Using this observation, the strategy prescribes exactly how the robot should move to reach a position along n_p, while at the same time guaranteeing the completeness of the strategy for arbitrary smooth surfaces. In particular, the strategy specifies (1) how to reach positions along n_p, and (2) how far away from p the robot should go.

Reaching a position close to p along n_p is trivial. Since p is visible from the robot's initial position, the robot moves along a straight line towards p, determines the surface normal at p when p is reached, and then moves along that normal. The following lemma shows that after p is reached, the robot can guarantee that a neighborhood of p is explored by performing an arbitrarily small position adjustment along n_p. See [18] for the proof.

Lemma 5.1 Let V(ε) be the set of visible points when the robot is at distance ε from p along n_p. Then, there is an ε_0 > 0 such that V(ε) contains a neighborhood of p for all ε ≤ ε_0.

Unfortunately, an arbitrarily small motion away from p can leave V(ε) arbitrarily small. The robot must therefore move sufficiently far from p so that V(ε) contains a neighborhood of p that is large enough to guarantee completeness. In [18] we show that the robot can guarantee completeness by simply moving away from p along n_p until either a collision with the surface does not allow further motion, or an a priori defined distance d_max is reached. Furthermore, the selection of d_max does not affect the completeness of the strategy. This result follows from the smoothness of S, which ensures that if the robot moves away from p along n_p, the distance traveled before a surface collision occurs cannot be arbitrarily small.

The above strategy can be improved by adding an additional stopping condition that reduces the amount of robot motion away from p. This condition uses Theorem 3.1 and requires the robot to monitor the occlusion boundary while moving away from p. In particular, during the robot's motion away from p its instantaneous velocity is along n_p. The occlusion boundary curves at any time during this motion can be partitioned into open segments falling into two categories, σ+ and σ−, whose points q satisfy n_q · (v − q) > 0 and n_q · (v − q) < 0, respectively. If l+ and l− are the distances of σ+ and σ− from p, respectively, the robot can stop moving when l+ ≥ l− (Figure 5).

Intuitively, this constraint allows the robot to move away from p as long as the set of visible points near p strictly expands. To see this, note that according to Theorem 3.1, the segments in σ+ slide over previously-occluded points on S, expanding the set of points that are visible in p's neighborhood, while the segments in σ− cause a contraction of this set. The distance l+ determines the size of the visible neighborhood of p, and increases monotonically as long as l+ < l−. Given that p is on the occlusion boundary from the robot's position v, the following neighborhood exploration strategy can now be specified:

Strategy (range sensor)

Step 1: Starting from v, move on a straight line to p.

Step 2: Determine the surface normal at p, n_p.

Step 3: Move in direction n_p, until one of the following conditions is satisfied:

a. The surface obstructs the robot's motion.

b. An a priori defined distance d_max from p is reached.

c. l+ ≥ l−, where l+ and l− are the distances of σ+ and σ− to p, respectively.
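Step 3 can be sketched as a loop with the three stopping conditions. Here `collided` and `boundary_distances` stand in for the robot's range sensing and are assumptions of this sketch, not part of the paper.

```python
def move_along_normal(step, d_max, collided, boundary_distances):
    """Sketch of Step 3: advance away from the selected point along its
    normal in increments of `step` until (a) the surface blocks the
    motion, (b) the preset distance d_max is reached, or (c) the
    contracting occlusion-boundary segments come as close to the point
    as the expanding ones (l_plus >= l_minus)."""
    d = 0.0
    while True:
        if collided(d + step):          # (a) surface obstructs motion
            return d, 'collision'
        d += step
        if d >= d_max:                  # (b) a priori distance reached
            return d, 'max distance'
        l_plus, l_minus = boundary_distances(d)
        if l_plus >= l_minus:           # (c) visible set stops expanding
            return d, 'boundary condition'

# Example: the expanding segments reach distance d while the
# contracting ones stay at distance 1.0; no collisions occur.
result = move_along_normal(0.25, 2.0, lambda x: False, lambda d: (d, 1.0))
```

With these stand-ins the robot halts at distance 1.0, where condition (c) first holds; tightening `step` refines where each condition triggers.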

The following theorem establishes the correctness of the strategy. The interested reader is referred to [18] for the proof. It is based on the fact that the surface is smooth and does not have infinite curvature at p.

Theorem 5.1 The range-sensor strategy is complete for smooth and connected surfaces of finite area.

6 Exploration Using a Camera

This section considers an alternative, camera-based strategy.

The strategy has three characteristics: (1) It uses a camera instead of a range sensor, (2) it moves the robot close to the selected point only when this is dictated by the shape of the object's surface, and (3) it is complete for a subset of the surfaces handled by the range-sensor strategy, namely those that are non-concave and generic. When the robot's sensor can provide information only about the shape of the occluding contour, the camera-based strategy allows the exploration of almost all surfaces that can be explored with such a sensor: It is easy to see that points in a surface concavity cannot belong to the visible rim, and hence cannot become explored by any strategy using only information derived from the occluding contour. The strategy is based on the following simple observation: Suppose the robot is positioned level with a point on a hill top; then this point will belong to the visible rim. By moving up or down, the robot will see just over the hill or not quite to the top, respectively. These motions cause the visible rim to slide over a neighborhood of that point, according to Theorem 3.1. In general, this motion corresponds to motion on the plane defined by the selected point p, the surface normal n_p, and the robot's position v [17]. The strategy uses this observation to explore a neighborhood N(p) of p while at the same time guaranteeing that N(p) is large enough to ensure completeness.

To achieve this, consider why N(p) can be arbitrarily small. There are two possibilities:

The size of N(p) depends on the length of the visible rim curve containing p and the distance of its endpoints from p, if such endpoints exist. Depending on the robot's position v, these lengths can be arbitrarily small; they can also become arbitrarily small because of a topological change in the visible rim when v is infinitesimally perturbed.

The size of N(p) is constrained by the amount of robot motion on the plane defined by p, n_p, and v.

The main idea of the camera-based strategy is to exploit the following result, which ensures that the length of the visible rim curve containing p never becomes arbitrarily small:

Proposition 6.1 Let S be a smooth, non-concave, generic surface. Given a sequence of points p_1, p_2, ..., and a sequence of positions v_1, v_2, ..., from which p_i is on the visible rim, there exists a sequence of positions c_1, c_2, ..., satisfying the following four conditions:

1. c_i is reachable from v_i.

2. p_i is visible from c_i.

3. There exists an ε > 0 such that the length of the visible rim curve containing p_i at c_i is larger than ε for all i.

4. There exists a δ > 0 such that for all i and all infinitesimal perturbations c_i' of c_i on the plane containing p_i, n_{p_i}, and c_i, the distance between the endpoints of the visible rim curve containing p_i at c_i' is greater than δ.

Given a point p, the strategy determines a set of candidate positions C(p) such that at least one of the resulting sequences satisfies Proposition 6.1 (Section 6.1).

Once C(p) is determined, ensuring that N(p) never diminishes is fairly simple. If Π is the plane defined by p, n_p, and the chosen position c, it suffices for the robot to move on a circle in Π centered at p of radius |c − p|. Because the robot can collide with the surface, this motion also involves moving closer to p along a curve in Π if such a collision occurs. Due to lack of space a description of this strategy is omitted. The interested reader is referred to [18] for the details. Given that p is on the visible rim from the robot's position v, the following neighborhood exploration strategy can now be specified:

Strategy (camera)

Step 1: Starting from v, plan a path to determine a set of candidate positions C(p), such that at least one c in C(p) satisfies Proposition 6.1.

Step 2: For each c in C(p), (a) move to c, and (b) starting from c, move in the plane defined by p, n_p, and c to explore a neighborhood of p.
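Step 2(b) can be sketched geometrically (our construction, with collision handling omitted): generate positions on a circle centred at the selected point, with radius equal to the robot's current distance, lying in the plane spanned by the direction to the robot and the surface normal.

```python
import math

def circle_in_plane(p, n, c, steps):
    """Positions on the circle centred at p, of radius |c - p|, in the
    plane through p spanned by the unit vector u from p to c and the
    component w of the normal n orthogonal to u (assumes n is not
    parallel to u)."""
    u = [ci - pi for ci, pi in zip(c, p)]
    r = math.sqrt(sum(x * x for x in u))
    u = [x / r for x in u]                        # first plane axis
    dot = sum(ni * ui for ni, ui in zip(n, u))
    w = [ni - dot * ui for ni, ui in zip(n, u)]   # second plane axis
    wn = math.sqrt(sum(x * x for x in w))
    w = [x / wn for x in w]
    return [tuple(pi + r * (math.cos(t) * ui + math.sin(t) * wi)
                  for pi, ui, wi in zip(p, u, w))
            for t in [2 * math.pi * k / steps for k in range(steps)]]

# Selected point at the origin with normal along z; robot at (2, 0, 0).
orbit = circle_in_plane((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (2.0, 0.0, 0.0), 8)
```

A real implementation would interleave this orbit with the collision-avoidance motion toward p described in the text; the sketch only fixes the geometry of the path.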

Theorem 6.1 The camera-based strategy is complete for smooth, non-concave, generic, and connected surfaces of finite area.

6.1 Positioning the Robot

This section briefly describes how to generate a path, starting from an initial position, that allows determination of the set C(p). The main idea in this process is that every position c in C(p) can be characterized by two geometric relationships: (1) the angle formed by segment cp and the asymptotes [20] and bitangents at p, and (2) the distance between c and p. In the following we show how these two relationships can be used to determine C(p). This essentially reduces the determination of C(p) to the determination of the bitangents and asymptotes at p. The following provides an intuitive explanation for the selection process through an example; for formal proofs and more details see [17, 18]. Suppose p is on the visible rim when the robot is positioned at v. The angle between segment vp and the asymptotes and bitangents affects the distance between p and the endpoints of the visible rim curve containing it. To see this, consider the example in Figure 6. By moving the robot close to the bitangent line through p (without changing its distance to p), the right T-junction endpoint of the visible rim curve containing p can get arbitrarily close to p. To avoid this, c must be such that the segment cp forms a large angle with both bitangent lines through p.

Similarly, the distance between v and p affects the length of the visible rim curve containing p. If the robot moves close enough to p in the example of Figure 6, the sphere containing p will not be occluded by the other two spheres. On the other hand, moving too close to p would cause the visible rim curve containing p to be arbitrarily short. This implies that c should be closer to p than the occluding spheres, but should not be arbitrarily close to p.

A generalization of the above for the case of a general smooth, non-concave and generic surface leads to the following result. We only present it here for the case where p is elliptic; a similar result holds for the case of parabolic and hyperbolic points.

Theorem 6.2 Let b_1, ..., b_k be the bitangent half-lines with origin p on S. Let q_1, ..., q_k be the points of tangency of b_1, ..., b_k with S for which the segments pq_i do not intersect S. Then, we can construct a set of positions C(p) that is uniquely determined by the half-lines b_i, the distances |pq_i|, and an a priori defined constant, and contains at least one position for which the resulting sequence satisfies Proposition 6.1. Furthermore, the size of C(p) is finite.

Therefore, to construct C(p), it suffices to determine the bitangents at p and their contacts with S that are visible from p. This can be provably achieved by circumnavigating p while maintaining the visibility of p, until the robot's initial position v is reached [17, 18]. During this process, the robot moves closer to p only to maintain the visibility of p, or when surface collisions prevent the robot from maintaining a fixed distance from p.

7 Discussion

The two strategies developed in the previous sections demonstrate that a number of trade-offs are involved when deciding on a strategy to explore an object. Information about object geometry (e.g., whether or not concavities exist) allows the robot to use simpler sensors and allows the robot to stay farther away from the object being explored.

An even more general consequence that follows from our analysis is that the robot can explore different parts of the same object in different ways. When multiple strategies are available, a crucial issue is how to decide which strategy to use.

As an example, consider how the two strategies can be combined. The robot can use a camera to explore from a distance any connected, non-concave region of a surface, even though that surface may also contain concavities. This is because the convergence properties of the camera-based strategy change only when the robot attempts to explore the surface near the boundary of such a region [17]: When the robot uses it to explore the surface near a parabolic curve bounding a surface concavity, the set of points explored near the parabolic curve diminishes as the parabolic curve is approached. Consequently, the robot can keep using the camera-based strategy until the rate of points explored (e.g., the area of newly-explored points) falls below an a priori defined threshold. Within such a framework, the robot will have to use the range sensor only when necessary, i.e., when exploring close to concavities. An important future direction of our research will be to develop a better understanding of the interaction between these and other exploration strategies.
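The switching rule just described can be sketched as follows (the threshold value and interface are our own illustration): stay with the camera-based strategy while the per-iteration gain in newly explored area is above a preset threshold, then fall back to the range sensor.

```python
def plan_sensors(area_gains, threshold):
    """Assign a sensor to each exploration iteration: use the camera
    until the newly-explored area per iteration drops below `threshold`
    (e.g., when approaching a concavity), then switch to the range
    sensor for the remaining iterations."""
    plan, camera = [], True
    for gain in area_gains:
        if camera and gain < threshold:
            camera = False
        plan.append('camera' if camera else 'range')
    return plan

# Gains fall off as the robot nears a concavity boundary.
plan = plan_sensors([5.0, 3.2, 0.4, 2.0], 1.0)
```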

References

[1] J. J. Koenderink, Solid Shape. MIT Press, 1990.

[2] J. H. Rieger, "On the classification of views of piecewise smooth objects," Image and Vision Computing, vol. 5, no. 2, pp. 91-97, 1987.

[3] J. H. Reif, "Complexity of the mover's problem and generalizations," in Proc. Twentieth Symposium on Foundations of Computer Science, pp. 421-427, 1979.

[4] R. A. Brooks, "A robust layered control system for a mobile robot," IEEE J. Robotics Automat., vol. 2, no. 1, pp. 14-23, 1986.

[5] Y. Aloimonos, "Purposive and qualitative active vision," in Proc. Int. Conf. on Pattern Recognition, pp. 346-360, 1990.

[6] V. J. Lumelsky, S. Mukhopadhyay, and K. Sun, "Dynamic path planning in sensor-based terrain acquisition," IEEE Trans. Robotics Automat., vol. 6, no. 4, pp. 462-472, 1990.

[7] K. N. Kutulakos, V. J. Lumelsky, and C. R. Dyer, "Vision-guided exploration: A step toward general motion planning in three dimensions," in Proc. IEEE Robotics Automat. Conf., pp. 289-296, 1993.

[8] A. Blake, A. Zisserman, and R. Cipolla, "Visual exploration of free-space," in Active Vision (A. Blake and A. Yuille, eds.), pp. 175-188, MIT Press, 1992.

[9] N. S. Rao, S. S. Iyengar, B. J. Oommen, and R. Kashyap, "Terrain acquisition by point robot amidst polyhedral obstacles," in Proc. Third Conf. on Artificial Intelligence Applications, pp. 170-175, 1987.

[10] C. I. Connoly, "The determination of next best views," in Proc. IEEE Robotics Automat. Conf., pp. 432-435, 1985.

[11] J. Maver and R. Bajcsy, "Occlusions as a guide for planning the next view," IEEE Trans. Pattern Anal. Machine Intell., vol. 15, no. 5, pp. 417-433, 1993.

[12] P. Whaite and F. P. Ferrie, "From uncertainty to visual exploration," IEEE Trans. Pattern Anal. Machine Intell., vol. 13, no. 10, pp. 1038-1049, 1991.

[13] J. J. Koenderink and A. J. van Doorn, "The singularities of the visual mapping," Biological Cybernetics, vol. 24, pp. 51-59, 1976.

[14] J. Ponce, S. Petitjean, and D. Kriegman, "Computing exact aspect graphs of curved objects: Algebraic surfaces," in Proc. Second European Conference on Computer Vision, 1992.

[15] J. H. Rieger, "The geometry of view space of opaque objects bounded by smooth surfaces," Artificial Intelligence, vol. 44, pp. 1-40, 1990.

[16] R. Cipolla and A. Blake, "Surface shape from the deformation of apparent contours," Int. J. Computer Vision, vol. 9, no. 2, pp. 83-112, 1992.

[17] K. N. Kutulakos and C. R. Dyer, "Global surface reconstruction by purposive control of observer motion," Tech. Rep. 1141, Computer Sciences Department, University of Wisconsin-Madison, April 1993. Available via anonymous ftp.

[18] K. N. Kutulakos, V. J. Lumelsky, and C. R. Dyer, "Provable strategies for vision-guided exploration in three dimensions," tech. rep., Computer Sciences Department, University of Wisconsin-Madison, 1994. To appear.

[19] Y. L. Kergosien, "La famille des projections orthogonales d'une surface et ses singularités," C. R. Acad. Sc. Paris, vol. 292, pp. 929-932, 1981.

[20] M. P. do Carmo, Differential Geometry of Curves and Surfaces. Englewood Cliffs, NJ: Prentice-Hall Inc., 1976.

Provable Strategies for Vision-Guided Exploration in Three Dimensions Kiriakos N.Kutulakos Charles R.Dyer Vladimir J.Lumelsky

Computer Sciences Department

University of Wisconsin

Madison,Wisconsin53706USA

Abstract

An approach is presented for exploring an unknown,ar-

bitrary surface in three-dimensional(3D)space by a mobile

robot.The main contributions are(1)an analysis of the ca-

pabilities a robot must possess and the trade-offs involved in

the design of an exploration strategy,and(2)two provably-

correct exploration strategies that exploit these trade-offs and

use visual sensors(e.g.,cameras and range sensors)to plan

the robot’s motion.No such analysis existed previously for

the case of a robot moving freely in3D space.The approach

exploits the notion of the occlusion boundary,i.e.,the points

separating the visible from the occluded parts of an object.

The occlusion boundary is a collection of curves that“slide”

over the surface when the robot’s position is continuously con-

trolled,inducing the visibility of surface points over which they

slide.The paths generated by our strategies force the occlu-

sion boundary to slide over the entire surface.The strategies

provide a basis for integrating motion planning and visual

sensing under a common computational framework.

1Introduction

An important,unsolved problem in robotics is the devel-

opment of strategies for real-time and provably-correct ex-

ploration of unknown environments where three-dimensional

reasoning and visual sensing is necessary.Such exploration

strategies are necessary when a mobile robot must control its

position in order to reach a desired location in an unknown

three-dimensional environment,or to produce a map of it.

The main focus of this paper is the development of strategies

for solving the object exploration task:In this task the goal

of the robot is to plan a collision-free path allowing it to

sense all points on the surface of an object in the environment.

We assume that the robot is a point and that it is equipped

with a camera or a range sensor.We assume that the robot’s

environment contains objects that are?nite volumes bounded

by closed and connected surfaces of arbitrary shape.

from every position of the robot.This result implies that visual sensors(i.e.,sensors that provide shape informa-

tion about the visible portions of object surfaces)are important for planning a robot’s motion.

In this paper we develop strategies that guarantee correctness and bounded length paths for exploring ar-bitrary geometrically-complex,three-dimensional environ-ments.When trying to perform tasks that depend on the appearance of an object(as is the case with three-dimensional exploration and path planning),provable correctness is critical (see Section2):Object appearance can drastically change de-pending on the position of the robot,making ad hoc strategies unpredictable,incomplete,and even non-terminating

To our knowledge,no strategies currently exist for solving the object exploration problem in non-polyhedral environ-ments when the robot is able to freely move in space(i.e.,has three degrees of freedom in position).In the case of polyhe-dra,the strategies developed(e.g.,[9])rely on the fact that the objects consist of a?nite collection of planar faces,and produce paths whose lengths diverge in the limit.A number of recent approaches in computer vision have been suggested for exploring the surfaces of objects[8,10–12],but their cor-rectness in arbitrary environments has not been investigated. 2Overview of our Approach

As a robot moves in its environment, the set of visible points changes. In our approach we capture and control these visibility transitions by analyzing the occlusion boundary of objects. For a given robot position, this is the collection of closed curves bounding the occluded object points (Section 3). The shape and position of these surface curves almost always depend continuously on the robot's position: when the robot's position is continuously controlled, these curves can be thought of as sliding on the surface. Furthermore, their topology (i.e., connectivity) can change only at a discrete set of robot positions. If an occlusion boundary curve slides over a previously-occluded point during the robot's motion, that point becomes visible. This allows us to formulate exploration as the task of forcing the occlusion boundary to slide over the points occluded from the robot's initial position (Section 3). Intuitively, the main difficulty in solving the exploration problem is that although the robot has some control over the motion of the occlusion boundary over the surface, this control is not complete; the motion of the occlusion boundary also depends on the shape of the surface itself.

Figure 1: Forcing the occlusion boundary curve, shown as a white circle in the first image, to slide over the dark curve drawn on the torus.

Figure 2: Both marked points belong to the occlusion boundary; only one of them belongs to the visible rim.

The strategies we present can be extended to handle piecewise-smooth surfaces. These surfaces are smooth except for zero- or one-dimensional sets whose points have a finite number of normals assigned to them.

Definition 3.1 The visible rim is the set of surface points p for which the line segment joining the robot's position to p is tangent to the surface at p and does not intersect the surface elsewhere.

Definition 3.2 The occlusion boundary bounds the visible and the occluded points. It is the set of surface points p for which the line segment joining the robot's position to p contains at least one visible rim point and does not intersect the surface elsewhere.

Clearly, the occlusion boundary is a superset of the visible rim (Figure 2). A convenient way to visualize and study the occlusion boundary and the visible rim is to look at their spherical projections onto an image centered at the robot's position (Figure 3). Their projections are identical; this projected contour is known as the occluding contour [1].
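The spherical projection just described is simply the map from a surface point to its viewing direction. A minimal sketch (helper names are ours, not the paper's) shows why the visible rim and the occlusion boundary have identical projections: points on the same line of sight collapse to the same occluding-contour point.

```python
import math

def spherical_projection(robot, point):
    """Image of a surface point on a unit sphere of viewing directions
    centered at the robot: the unit vector from robot to point."""
    d = [p - r for p, r in zip(point, robot)]
    n = math.sqrt(sum(c * c for c in d))
    return tuple(c / n for c in d)

robot = (0.0, 0.0, 0.0)
rim = (1.0, 2.0, 2.0)       # a visible rim point, distance 3
occ = (2.0, 4.0, 4.0)       # an occlusion boundary point on the same ray
pr, po = spherical_projection(robot, rim), spherical_projection(robot, occ)
assert all(math.isclose(a, b) for a, b in zip(pr, po))
```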

In general, the occlusion boundary is a collection of closed, piecewise-smooth curves whose number and shape depend on the robot's position. The endpoints of the smooth segments of these curves project to cusps or T-junctions on the occluding contour (Figure 3). Given a smooth path for the robot, the curves move on the surface in a way that depends smoothly on the path parameter, except for a discrete set of parameter values at which their topology changes. These changes occur precisely when the topology of the occluding contour changes [13-15].

Between any two consecutive topological transitions the occlusion boundary can be thought of as a collection of smooth curves that "slide" over the surface of the object as the robot moves, causing deformations to the occluding contour. In particular, given an instantaneous direction of motion, we can express the motion of the occlusion boundary using the epipolar parameterization. The details of this parameterization are not important here and the reader is referred to [16]. The important point is that given a smooth segment of the occluding contour, this parameterization allows us to define the segment of the occlusion boundary that corresponds to it.

Figure 3: The occluding contour. Each point on the occlusion boundary projects to a single point on the spherical image. For simplicity, the image of the occlusion boundary on a plane tangent to the spherical image is shown.

The motion of the occlusion boundary induced by the robot's motion directly affects the set of surface points becoming visible. When the occlusion boundary slides over a surface point that was occluded from all earlier positions along the path, that point becomes visible. The following theorem (see [17] for a proof) gives a qualitative characterization of these visibility transitions (Figure 4):

Theorem 3.1 Let p be a surface point and let q be a point of tangency with the surface of the segment joining the robot's position to p. If q lies on a smooth occlusion boundary segment whose points satisfy the first of two complementary local geometric conditions, all points on that segment are occluded from the robot's position; conversely, if its points satisfy the second condition, all points on the segment are visible from that position.
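Definitions 3.1 and 3.2 both hinge on whether the line of sight to a point intersects the surface. The following sketch tests this for the simplest possible "surface", a single sphere; the function name and the closed-form segment-sphere test are ours, not the paper's.

```python
def occluded_by_sphere(robot, point, center, radius):
    """True if the segment from robot to point passes through a sphere
    (center, radius): a minimal stand-in for the paper's test of
    whether the segment to a point intersects the surface."""
    # Closest point of the segment robot + t*(point - robot), t in [0,1],
    # to the sphere center.
    d = [p - r for p, r in zip(point, robot)]
    f = [r - c for r, c in zip(robot, center)]
    a = sum(x * x for x in d)
    t = max(0.0, min(1.0, -sum(x * y for x, y in zip(f, d)) / a))
    closest = [r + t * x for r, x in zip(robot, d)]
    dist2 = sum((x - c) ** 2 for x, c in zip(closest, center))
    return dist2 < radius * radius

# A point directly behind a unit sphere is occluded; one beside it is not.
robot = (0.0, 0.0, 0.0)
assert occluded_by_sphere(robot, (10, 0, 0), (5, 0, 0), 1.0)
assert not occluded_by_sphere(robot, (10, 5, 0), (5, 0, 0), 1.0)
```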

Once a point becomes visible, it may be possible to recover its three-dimensional coordinates using a visual sensor. We consider two types of sensors:

Range sensors. With such a sensor the robot can determine the coordinates of all visible points.

Cameras. With such a sensor the robot can detect the occluding contour. Furthermore, when the robot moves along a path and knows its speed and acceleration, it can determine the coordinates of all points on the visible rim from the deformation of the occluding contour [16].

The exploration problem can now be formulated as follows:

Definition 3.3 (Range sensor-based exploration) Given a connected surface and an initial position, generate a finite-length path such that the occlusion boundary slides over all points on the surface that were occluded at the initial position.

Definition 3.4 (Camera-based exploration) Given a connected surface and an initial position, generate a finite-length path such that every point occluded at the initial position lies on the visible rim at some position along the path.

To facilitate our discussion we use the following de?nition:

De?nition3.5A point on is explored if and only if the three-dimensional coordinates of were determined by the robot from some position along its path.The boundary of the explored points is the exploration frontier.

4 Incremental Exploration

Having provided a formulation for the exploration problem, this section considers a strategy for exploring an object of arbitrary shape. The idea is to control the robot's motion so that the set of explored points incrementally "grows" on the surface. To achieve this objective, the robot performs the following actions during each iteration: it selects a point on the exploration frontier, moves to a position where that point is visible, and finally performs a sequence of motions forcing all points in a neighborhood of the point to be explored. This process terminates when the set of explored points is identical to the object's surface, i.e., when the exploration frontier degenerates to a collection of isolated points.

To completely specify our exploration strategy we need to answer four questions: (1) how to select the point, (2) how to find a position where it is visible, (3) how to explore a neighborhood of it, and (4) how to guarantee that only a finite number of iterations are necessary. The crucial point in our approach is that the third and fourth questions, which lie at the heart of the exploration problem, can be answered using a local strategy, i.e., one that does not depend on the history of the exploration process (the set of points explored and the path generated in the previous iterations). We discuss each of the four questions below.

In our approach, the point is simply chosen to be any point on the exploration frontier. The coordinates of the points comprising the frontier are available to the robot because the exploration frontier bounds the set of explored points.

The answer to the second question is also simple. If the robot's sensor is a camera, all points on the exploration frontier were points on the visible rim at some previous position of the robot; if it is a range sensor, all points on the exploration frontier were points on the occlusion boundary at some previous position. Hence, it suffices for the robot to retrace its path back to a position where the selected point was on the occlusion boundary or the visible rim. This can be achieved if, along its path, the robot saves its current position together with the set of points belonging to the visible rim (or the occlusion boundary, if a range sensor is used) at that position. This information allows the robot to (1) associate with each exploration frontier point a set of previous positions from which that point is visible, and (2) plan a collision-free path to any such position.
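The bookkeeping just described, saving each position together with the rim or boundary points sensed there so that a frontier point can later be traced back to a position that saw it, can be sketched as follows (the data layout and names are ours):

```python
def record_step(log, position, visible_points):
    """Append the robot's position together with the set of rim /
    occlusion-boundary points sensed there (hypothetical bookkeeping,
    mirroring the scheme described in the text)."""
    log.append((position, frozenset(visible_points)))

def position_seeing(log, point):
    """Return a previously recorded position from which `point` was
    sensed, or None if no such position exists."""
    for position, seen in reversed(log):   # prefer the most recent
        if point in seen:
            return position
    return None

log = []
record_step(log, (0, 0, 0), {"p1", "p2"})
record_step(log, (1, 0, 0), {"p2", "p3"})
assert position_seeing(log, "p1") == (0, 0, 0)
assert position_seeing(log, "p3") == (1, 0, 0)
assert position_seeing(log, "p9") is None
```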

Now suppose that the selected point is visible. We use a neighborhood exploration strategy to answer the third question. The only input to this strategy is the selected point. The goal is to generate a path, based on information provided by the robot's sensor, such that an open neighborhood of the point becomes explored.

In order to explore the surface in a finite amount of time, the neighborhood must be large enough so that only a finite number of such neighborhoods is necessary to cover the whole surface of the object.

Definition 4.1 Given a class of surfaces, a neighborhood exploration strategy is complete for that class if and only if, for all surfaces in the class and all sequences of selected points, the corresponding sequence of explored neighborhoods covers the surface after some finite number of iterations.

Equipped with a complete neighborhood exploration strategy, the robot uses the following strategy to explore a surface:

Surface Exploration Strategy

Step 1: If a range sensor is available, select a point on the occlusion boundary; otherwise select a point on the visible rim. (Initially the exploration frontier is identical to the occlusion boundary.)

Step 2: Use the neighborhood exploration strategy to explore a neighborhood of the selected point. During this step, the set of points explored on the surface is expanded.

Step 3: Select any point on the current exploration frontier. If no such point exists, stop; the exploration process is complete.

Step 4: If a range sensor is available, move to a previous position at which the selected point is on the occlusion boundary; otherwise move to a previous position where it is on the visible rim.

Step 5: Repeat Steps 2-5.
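Steps 1-5 amount to a work-list loop over the exploration frontier. A toy sketch follows, with the neighborhood exploration strategy of Step 2 left as a caller-supplied function; all names are ours, and real sensing, motion planning, and path retracing are abstracted away.

```python
def explore_surface(initial_frontier, explore_neighborhood, max_iters=1000):
    """Skeleton of the surface exploration loop. The caller supplies
    explore_neighborhood(point), which returns the set of newly
    explored points and the new frontier points it exposes (both
    names are illustrative, not from the paper)."""
    explored = set()
    frontier = set(initial_frontier)
    for _ in range(max_iters):
        if not frontier:
            return explored            # frontier empty: done (Step 3)
        point = frontier.pop()         # Step 3: pick any frontier point
        newly_explored, new_frontier = explore_neighborhood(point)  # Step 2
        explored |= newly_explored
        frontier |= new_frontier - explored
    raise RuntimeError("iteration budget exhausted")

# Toy 'surface': integer points 0..4; each neighborhood exposes the next.
def toy_neighborhood(p):
    return {p}, ({p + 1} if p < 4 else set())

assert explore_surface({0}, toy_neighborhood) == {0, 1, 2, 3, 4}
```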

The only step left unspecified in the above strategy is Step 2. The next two sections describe two neighborhood exploration strategies that can be used to implement this step. These strategies give rise to two different solutions to the exploration problem. The first strategy, which is the most general, uses a range sensor. It is complete for the class of smooth and connected surfaces of finite area (Section 5). The second one uses a camera and is complete only for the subclass of surfaces with no concavities, e.g., the torus or, more generally, surfaces produced by appropriately deforming spheres with handles (Section 6).

5 Exploration Using a Range Sensor

Suppose that the robot needs to explore the neighborhood of a selected occlusion boundary point with the help of a range sensor. The strategy is based on the following observation: if the robot looks at the point from the direction of its surface normal and is positioned close enough to it, all points in a neighborhood of the point must be visible. Intuitively, the reason for this is that the smoothness of the surface around the point severely constrains the variation of the surface normals in the point's neighborhood; this limits the sets of points that can be occluded when the robot is close to the point along the normal.

Using this observation, the strategy prescribes exactly how the robot should move to reach a position along the normal, while at the same time guaranteeing the completeness of the strategy for arbitrary smooth surfaces. In particular, the strategy specifies (1) how to reach positions along the normal, and (2) how far away from the surface the robot should go.

Reaching a position close to the selected point along its normal is trivial. Since the point is visible from the robot's initial position, the robot moves along a straight line towards it, determines the surface normal when the point is reached, and then moves along that normal.

The following lemma shows that after the point is reached, the robot can guarantee that a neighborhood of it is explored by performing an arbitrarily small position adjustment along the normal. See [18] for the proof.

Lemma 5.1 Let V(d) be the set of visible points when the robot is at distance d from the selected point along its normal. Then, there is an ε > 0 such that V(d) contains a neighborhood of the point for all 0 < d ≤ ε.

Unfortunately, an arbitrarily small motion away from the point can leave the visible neighborhood arbitrarily small. The robot must therefore move sufficiently far from the surface so that the visible set contains a neighborhood of the point that is large enough to guarantee completeness. In [18] we show that the robot can guarantee completeness by simply moving away from the point along the normal until either a collision with the surface does not allow further motion, or an a priori defined distance is reached. Furthermore, the selection of this distance does not affect the completeness of the strategy. This result follows from the smoothness of the surface, which ensures that if the robot moves away from the point along the normal, the distance traveled before a surface collision occurs cannot be arbitrarily small.

The above strategy can be improved by adding a stopping condition that reduces the amount of robot motion away from the surface. This condition uses Theorem 3.1 and requires the robot to monitor the occlusion boundary while moving away from the point. In particular, during the robot's motion away from the surface its instantaneous velocity is along the normal. The occlusion boundary curves at any time during this motion can be partitioned into open segments falling into two categories: segments that expand the set of visible points near the selected point and segments that contract it. If d+ and d- are the distances from the selected point of the nearest expanding and contracting segments, respectively, the robot can stop moving when d- ≤ d+ (Figure 5).

Intuitively, this constraint allows the robot to move away from the surface as long as the set of visible points near the selected point strictly expands. To see this, note that according to Theorem 3.1 the expanding segments slide over previously-occluded points, enlarging the set of points that are visible in the point's neighborhood, while the contracting segments cause a contraction of this set. The distance of the nearest occlusion boundary segment determines the size of the visible neighborhood of the point, and increases monotonically as long as the nearest segment is an expanding one. Given that the selected point is on the occlusion boundary from the robot's position, the following neighborhood exploration strategy can now be specified:

Strategy (range sensor-based)

Step 1: Starting from the current position, move on a straight line to the selected point.

Step 2: Determine the surface normal at that point.

Step 3: Move in the direction of the normal until one of the following conditions is satisfied:

a. The surface obstructs the robot's motion.

b. An a priori defined distance from the point is reached.

c. d- ≤ d+, where d+ and d- are the distances to the point of the nearest expanding and contracting occlusion boundary segments, respectively.
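Under our reading of the stopping test in Step 3c, keep retreating along the normal while the nearest occlusion boundary segment around the selected point is one that expands visibility, the check reduces to comparing two distances. This is an interpretation of the condition described in the text, not the paper's exact formula:

```python
def should_stop(expanding_dists, contracting_dists):
    """Stop the retreat along the normal once a contracting occlusion
    boundary segment is at least as close to the selected point as the
    nearest expanding one (our reading of condition c)."""
    d_plus = min(expanding_dists, default=float("inf"))
    d_minus = min(contracting_dists, default=float("inf"))
    return d_minus <= d_plus

assert not should_stop([1.0, 3.0], [2.5])  # nearest segment expands: keep moving
assert should_stop([2.0], [1.5])           # contracting segment nearer: stop
```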

The following theorem establishes the correctness of the strategy; the interested reader is referred to [18] for the proof. The proof is based on the fact that the surface is smooth and does not have infinite curvature at the selected point.

Theorem 5.1 The strategy is complete for smooth and connected surfaces of finite area.

6 Exploration Using a Camera

This section considers an alternative, camera-based strategy. The strategy has three characteristics: (1) it uses a camera instead of a range sensor, (2) it moves the robot close to the selected point only when this is dictated by the shape of the object's surface, and (3) it is complete for a subset of the surfaces handled by the range sensor-based strategy, namely those that are non-concave and generic. When the robot's sensor can provide information only about the shape of the occluding contour, the strategy allows the exploration of almost all surfaces that can be explored with such a sensor: it is easy to see that points in a surface concavity cannot belong to the visible rim, and hence cannot become explored by any strategy using only information derived from the occluding contour. The strategy is based on the following simple observation: suppose the robot is positioned level with a point on a hill top; then this point will belong to the visible rim. By moving up or down, the robot will see just over the hill or not quite to the top, respectively. These motions cause the visible rim to slide over a neighborhood of that point, according to Theorem 3.1. In general, this motion corresponds to motion on the plane defined by the selected point, the surface normal at the point, and the robot's position [17]. The strategy uses this observation to explore a neighborhood of the point while at the same time guaranteeing that the neighborhood is large enough to ensure completeness.

To achieve this, consider why the explored neighborhood can be arbitrarily small. There are two possibilities:

The size of the neighborhood depends on the length of the visible rim curve containing the point and on the distance of that curve's endpoints from the point, if such endpoints exist. Depending on the robot's position, these lengths can be arbitrarily small; they can also become arbitrarily small because of a topological change in the visible rim when the robot's position is infinitesimally perturbed.

The size of the neighborhood is also constrained by the amount of robot motion on the plane.

The main idea of the strategy is to exploit the following result, which ensures that the length of the visible rim curve containing the point never becomes arbitrarily small:

Proposition 6.1 Let S be a smooth, non-concave, generic surface. Given a sequence of points on S and a sequence of positions from which each point is on the visible rim, there exists a sequence of alternative positions satisfying the following four conditions:

1. Each alternative position lies on the plane associated with its point and is reachable from the corresponding original position.

2. Each point is visible from its alternative position.

3. There exists a positive lower bound on the length of the visible rim curve containing each point, viewed from its alternative position.

4. There exists a positive lower bound on the distance between the endpoints of the visible rim curve containing each point, for all infinitesimal perturbations of the alternative position on the plane containing the point, its normal, and that position.

Given a point, the strategy determines a set of candidate positions such that at least one of the resulting sequences satisfies Proposition 6.1 (Section 6.1).

Once this set is determined, ensuring that the explored neighborhood never diminishes is fairly simple. If Π is the plane defined by the point, its surface normal, and the robot's position, it suffices for the robot to move on a circle in Π centered at the point. Because the robot can collide with the surface, this motion also involves moving closer to the point along a curve in the intersection of Π with the surface, if such a collision occurs. Due to lack of space a description of this strategy is omitted; the interested reader is referred to [18] for the details. Given that the point is on the visible rim from the robot's position, the following neighborhood exploration strategy can now be specified:

Strategy (camera-based)

Step 1: Starting from the current position, plan a path to determine a set of candidate positions such that at least one of them satisfies Proposition 6.1.

Step 2: For each candidate position, (a) move to it, and (b) starting from it, move in the plane defined by the selected point, its normal, and that position to explore a neighborhood of the point.
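The motion of Step 2(b) takes place in the plane spanned by the selected point's normal and the robot's viewing direction. A small geometric sketch of sampling such positions follows (our own construction; the paper's choice of radius and its collision handling are omitted):

```python
import math

def _unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def circle_in_plane(p, normal, robot, radius, steps):
    """Sample positions on a circle of the given radius, centered at
    the selected point p, inside the plane spanned by p's normal and
    the robot's viewing direction. A geometric sketch; all names
    are ours. Assumes the viewing direction is not parallel to the
    normal."""
    u = _unit(normal)
    view = [r - a for r, a in zip(robot, p)]
    # Component of the viewing direction orthogonal to the normal.
    dot = sum(a * b for a, b in zip(view, u))
    w = _unit([v - dot * b for v, b in zip(view, u)])
    out = []
    for k in range(steps):
        t = 2 * math.pi * k / steps
        out.append(tuple(a + radius * (math.cos(t) * b + math.sin(t) * c)
                         for a, b, c in zip(p, u, w)))
    return out

pos = circle_in_plane((0, 0, 0), (0, 0, 1), (5, 0, 1), radius=2.0, steps=8)
# Every sample stays at distance `radius` from p and in the x-z plane (y = 0).
assert all(math.isclose(math.sqrt(x*x + y*y + z*z), 2.0) for x, y, z in pos)
assert all(abs(y) < 1e-12 for x, y, z in pos)
```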

Theorem 6.1 The strategy is complete for smooth, non-concave, generic, and connected surfaces of finite area.

6.1 Positioning the Robot on the Plane

This section briefly describes how to generate a path, starting from an initial position, that allows determination of the set of candidate positions. The main idea in this process is that every candidate position can be characterized by two geometric relationships: (1) the angle formed by the segment joining the position to the selected point and the asymptotes [20] and bitangents at the point, and (2) the distance between the position and the point. In the following we show how these two relationships can be used to determine the set. This essentially reduces the determination of the candidate positions to the determination of the bitangents and asymptotes at the point. The following provides an intuitive explanation for the selection process through an example; for formal proofs and more details see [17, 18]. Suppose the point is on the visible rim from the robot's current position. The angle between the viewing segment and the asymptotes and bitangents affects the distance between the point and the endpoints of the visible rim curve containing it. To see this, consider the example in Figure 6. By moving the robot close to the bitangent line through the point (without changing its distance to the point), the right T-junction endpoint of the visible rim curve containing the point can get arbitrarily close to it. To avoid this, the candidate position must be such that the viewing segment forms a large angle with both bitangent lines through the point.

Similarly, the distance between the robot and the point affects the length of the visible rim curve containing it. If the robot moves close enough to the point in the example of Figure 6, the sphere containing the point will not be occluded by the other two spheres. On the other hand, moving too close to the point would cause the visible rim curve containing it to be arbitrarily short. This implies that the candidate position should be closer to the point than the occluding spheres, but should not be arbitrarily close to it.

A generalization of the above for the case of a general smooth, non-concave and generic surface leads to the following result. We only present it here for the case where the point is elliptic; a similar result holds for the case of parabolic and hyperbolic points.

Theorem 6.2 Consider the bitangent half-lines with origin at the selected point, and their points of tangency with the surface for which the corresponding tangency segments do not intersect the surface. Then, we can construct a set of positions that is uniquely determined by these half-lines, the distances to the tangency points, and an a priori defined constant, and that contains at least one position for which the resulting sequence satisfies Proposition 6.1. Furthermore, the size of this set is finite.

Therefore, to construct this set, it suffices to determine the bitangents at the point and their points of contact with the surface that are visible from it. This can be provably achieved by circumnavigating the point on the plane, until the robot's initial position is reached, while maintaining the visibility of the point [17, 18]. During this process, the robot moves closer to the point only to maintain its visibility, or when surface collisions prevent the robot from maintaining a fixed distance from it.

7 Discussion

The two strategies developed in the previous sections demonstrate that a number of trade-offs are involved when deciding on a strategy to explore an object. Information about object geometry (e.g., whether or not concavities exist) allows the robot to use simpler sensors and to stay farther away from the object being explored.

An even more general consequence that follows from our analysis is that the robot can explore different parts of the same object in different ways. When multiple strategies are available, a crucial issue is how to decide which strategy to use.

As an example, consider how the range sensor-based and camera-based strategies can be combined. The robot can use a camera to explore from a distance any connected, non-concave region of a surface, even though that surface may also contain concavities. This is because the convergence properties of the camera-based strategy change only when the robot attempts to explore the surface near the boundary of such a region [17]: when the robot uses it to explore the surface near a parabolic curve bounding a surface concavity, the set of points explored near the parabolic curve diminishes as the parabolic curve is approached. Consequently, the robot can keep using the camera-based strategy until the rate of points explored (e.g., the area of newly-explored points) falls below an a priori defined threshold. Within such a framework, the robot will have to use the range sensor-based strategy only when necessary, i.e., when exploring close to concavities. An important future direction of our research will be to develop a better understanding of the interaction between these and other exploration strategies.

References

[1] J. J. Koenderink, Solid Shape. MIT Press, 1990.

[2] J. H. Rieger, "On the classification of views of piecewise smooth objects," Image and Vision Computing, vol. 5, no. 2, pp. 91–97, 1987.

[3] J. H. Reif, "Complexity of the mover's problem and generalizations," in Proc. Twentieth Symposium on Foundations of Computer Science, pp. 421–427, 1979.

[4] R. A. Brooks, "A robust layered control system for a mobile robot," IEEE J. Robotics Automat., vol. 2, no. 1, pp. 14–23, 1986.

[5] Y. Aloimonos, "Purposive and qualitative active vision," in Proc. Int. Conf. on Pattern Recognition, pp. 346–360, 1990.

[6] V. J. Lumelsky, S. Mukhopadhyay, and K. Sun, "Dynamic path planning in sensor-based terrain acquisition," IEEE Trans. Robotics Automat., vol. 6, no. 4, pp. 462–472, 1990.

[7] K. N. Kutulakos, V. J. Lumelsky, and C. R. Dyer, "Vision-guided exploration: A step toward general motion planning in three dimensions," in Proc. IEEE Robotics Automat. Conf., pp. 289–296, 1993.

[8] A. Blake, A. Zisserman, and R. Cipolla, "Visual exploration of free-space," in Active Vision (A. Blake and A. Yuille, eds.), pp. 175–188, MIT Press, 1992.

[9] N. S. Rao, S. S. Iyengar, B. J. Oommen, and R. Kashyap, "Terrain acquisition by point robot amidst polyhedral obstacles," in Proc. Third Conf. on Artificial Intelligence Applications, pp. 170–175, 1987.

[10] C. I. Connolly, "The determination of next best views," in Proc. IEEE Robotics Automat. Conf., pp. 432–435, 1985.

[11] J. Maver and R. Bajcsy, "Occlusions as a guide for planning the next view," IEEE Trans. Pattern Anal. Machine Intell., vol. 15, no. 5, pp. 417–433, 1993.

[12] P. Whaite and F. P. Ferrie, "From uncertainty to visual exploration," IEEE Trans. Pattern Anal. Machine Intell., vol. 13, no. 10, pp. 1038–1049, 1991.

[13] J. J. Koenderink and A. J. van Doorn, "The singularities of the visual mapping," Biological Cybernetics, vol. 24, pp. 51–59, 1976.

[14] J. Ponce, S. Petitjean, and D. Kriegman, "Computing exact aspect graphs of curved objects: Algebraic surfaces," in Proc. Second European Conference on Computer Vision, 1992.

[15] J. H. Rieger, "The geometry of view space of opaque objects bounded by smooth surfaces," Artificial Intelligence, vol. 44, pp. 1–40, 1990.

[16] R. Cipolla and A. Blake, "Surface shape from the deformation of apparent contours," Int. J. Computer Vision, vol. 9, no. 2, pp. 83–112, 1992.

[17] K. N. Kutulakos and C. R. Dyer, "Global surface reconstruction by purposive control of observer motion," Tech. Rep. 1141, Computer Sciences Department, University of Wisconsin–Madison, April 1993. Available via ftp.

[18] K. N. Kutulakos, V. J. Lumelsky, and C. R. Dyer, "Provable strategies for vision-guided exploration in three dimensions," Tech. Rep., Computer Sciences Department, University of Wisconsin–Madison, 1993. To appear.

[19] Y. L. Kergosien, "La famille des projections orthogonales d'une surface et ses singularités," C. R. Acad. Sc. Paris, vol. 292, pp. 929–932, 1981.

[20] M. P. do Carmo, Differential Geometry of Curves and Surfaces. Englewood Cliffs, NJ: Prentice-Hall, 1976.

[批处理]计算时间差的函数etime

[批处理]计算时间差的函数etime 计算时间差的函数etime 收藏 https://www.doczj.com/doc/8712941397.html,/thread-4701-1-1.html 这个是脚本代码[保存为etime.bat放在当前路径下即可:免费内容: :etime <begin_time> <end_time> <return> rem 所测试任务的执行时间不超过1天// 骨瘦如柴版setlocal&set be=%~1:%~2&set cc=(%%d-%%a)*360000+(1%%e-1%%b)*6000+1%%f-1% %c&set dy=-8640000 for /f "delims=: tokens=1-6" %%a in ("%be:.=%")do endlocal&set/a %3=%cc%,%3+=%dy%*("%3>> 31")&exit/b ---------------------------------------------------------------------------------------------------------------------------------------- 计算两个时间点差的函数批处理etime 今天兴趣大法思考了好多bat的问题,以至于通宵 在论坛逛看到有个求时间差的"函数"被打搅调用地方不少(大都是测试代码执行效率的) 免费内容: :time0

::计算时间差(封装) @echo off&setlocal&set /a n=0&rem code 随风@https://www.doczj.com/doc/8712941397.html, for /f "tokens=1-8 delims=.: " %%a in ("%~1:%~2") do ( set /a n+=10%%a%%100*360000+10%%b%%100*6000+10%% c%%100*100+10%%d%%100 set /a n-=10%%e%%100*360000+10%%f%%100*6000+10%%g %%100*100+10%%h%%100) set /a s=n/360000,n=n%%360000,f=n/6000,n=n%%6000,m=n/1 00,n=n%%100 set "ok=%s% 小时%f% 分钟%m% 秒%n% 毫秒" endlocal&set %~3=%ok:-=%&goto :EOF 这个代码的算法是统一找时间点凌晨0:00:00.00然后计算任何一个时间点到凌晨的时间差(单位跑秒) 然后任意两个时间点求时间差就是他们相对凌晨时间点的时间数的差 对09这样的非法8进制数的处理用到了一些技巧,还有两个时间参数不分先后顺序,可全可点, 但是这个代码一行是可以省去的(既然是常被人掉用自然体

延时子程序计算方法

学习MCS-51单片机,如果用软件延时实现时钟,会接触到如下形式的延时子程序:delay:mov R5,#data1 d1:mov R6,#data2 d2:mov R7,#data3 d3:djnz R7,d3 djnz R6,d2 djnz R5,d1 Ret 其精确延时时间公式:t=(2*R5*R6*R7+3*R5*R6+3*R5+3)*T (“*”表示乘法,T表示一个机器周期的时间)近似延时时间公式:t=2*R5*R6*R7 *T 假如data1,data2,data3分别为50,40,248,并假定单片机晶振为12M,一个机器周期为10-6S,则10分钟后,时钟超前量超过1.11秒,24小时后时钟超前159.876秒(约2分40秒)。这都是data1,data2,data3三个数字造成的,精度比较差,建议C描述。

上表中e=-1的行(共11行)满足(2*R5*R6*R7+3*R5*R6+3*R5+3)=999,999 e=1的行(共2行)满足(2*R5*R6*R7+3*R5*R6+3*R5+3)=1,000,001 假如单片机晶振为12M,一个机器周期为10-6S,若要得到精确的延时一秒的子程序,则可以在之程序的Ret返回指令之前加一个机器周期为1的指令(比如nop指令), data1,data2,data3选择e=-1的行。比如选择第一个e=-1行,则精确的延时一秒的子程序可以写成: delay:mov R5,#167 d1:mov R6,#171 d2:mov R7,#16 d3:djnz R7,d3 djnz R6,d2

djnz R5,d1 nop ;注意不要遗漏这一句 Ret 附: #include"iostReam.h" #include"math.h" int x=1,y=1,z=1,a,b,c,d,e(999989),f(0),g(0),i,j,k; void main() { foR(i=1;i<255;i++) { foR(j=1;j<255;j++) { foR(k=1;k<255;k++) { d=x*y*z*2+3*x*y+3*x+3-1000000; if(d==-1) { e=d;a=x;b=y;c=z; f++; cout<<"e="<

用c++编写计算日期的函数

14.1 分解与抽象 人类解决复杂问题采用的主要策略是“分而治之”,也就是对问题进行分解,然后分别解决各个子问题。著名的计算机科学家Parnas认为,巧妙的分解系统可以有效地系统的状态空间,降低软件系统的复杂性所带来的影响。对于复杂的软件系统,可以逐个将它分解为越来越小的组成部分,直至不能分解为止。这样在小的分解层次上,人就很容易理解并实现了。当所有小的问题解决完毕,整个大的系统也就解决完毕了。 在分解过程中会分解出很多类似的小问题,他们的解决方式是一样的,因而可以把这些小问题,抽象出来,只需要给出一个实现即可,凡是需要用到该问题时直接使用即可。 案例日期运算 给定日期由年、月、日(三个整数,年的取值在1970-2050之间)组成,完成以下功能: (1)判断给定日期的合法性; (2)计算两个日期相差的天数; (3)计算一个日期加上一个整数后对应的日期; (4)计算一个日期减去一个整数后对应的日期; (5)计算一个日期是星期几。 针对这个问题,很自然想到本例分解为5个模块,如图14.1所示。 图14.1日期计算功能分解图 仔细分析每一个模块的功能的具体流程: 1. 判断给定日期的合法性: 首先判断给定年份是否位于1970到2050之间。然后判断给定月份是否在1到12之间。最后判定日的合法性。判定日的合法性与月份有关,还涉及到闰年问题。当月份为1、3、5、7、8、10、12时,日的有效范围为1到31;当月份为4、6、9、11时,日的有效范围为1到30;当月份为2时,若年为闰年,日的有效范围为1到29;当月份为2时,若年不为闰年,日的有效范围为1到28。

图14.2日期合法性判定盒图 判断日期合法性要要用到判断年份是否为闰年,在图14.2中并未给出实现方法,在图14.3中给出。 图14.3闰年判定盒图 2. 计算两个日期相差的天数 计算日期A (yearA 、monthA 、dayA )和日期B (yearB 、monthB 、dayB )相差天数,假定A 小于B 并且A 和B 不在同一年份,很自然想到把天数分成3段: 2.1 A 日期到A 所在年份12月31日的天数; 2.2 A 之后到B 之前的整年的天数(A 、B 相邻年份这部分没有); 2.3 B 日期所在年份1月1日到B 日期的天数。 A 日期 A 日期12月31日 B 日期 B 日期1月1日 整年部分 整年部分 图14.4日期差分段计算图 若A 小于B 并且A 和B 在同一年份,直接在年内计算。 2.1和2.3都是计算年内的一段时间,并且涉及到闰年问题。2.2计算整年比较容易,但

Excel中如何计算日期差

Excel中如何计算日期差: ----Excel中最便利的工作表函数之一——Datedif名不见经传,但却十分好用。Datedif能返回任意两个日期之间相差的时间,并能以年、月或天数的形式表示。您可以用它来计算发货单到期的时间,还可以用它来进行2000年的倒计时。 ----Excel中的Datedif函数带有3个参数,其格式如下: ----=Datedif(start_date,end_date,units) ----start_date和end_date参数可以是日期或者是代表日期的变量,而units则是1到2个字符长度的字符串,用以说明返回日期差的形式(见表1)。图1是使用Datedif函数的一个例子,第2行的值就表明这两个日期之间相差1年又14天。units的参数类型对应的Datedif返回值 “y”日期之差的年数(非四舍五入) “m”日期之差的月数(非四舍五入) “d”日期之差的天数(非四舍五入) “md”两个日期相减后,其差不足一个月的部分的天数 “ym”两个日期相减后,其差不足一年的部分的月数 “yd”两个日期相减后,其差不足一年的部分的天数

表1units参数的类型及其含义 图1可以通过键入3个带有不同参数的Datedif公式来计算日期的差。units的参数类型 ----图中:单元格Ex为公式“=Datedif(Cx,Dx,“y”)”得到的结果(x=2,3,4......下同) ----Fx为公式“=Datedif(Cx,Dx,“ym”)”得到的结果 ----Gx为公式“=Datedif(Cx,Dx,“md”)”得到的结果 现在要求两个日期之间相差多少分钟,units参数是什么呢? 晕,分钟你不能用天数乘小时再乘分钟吗? units的参数类型对应的Datedif返回值 “y”日期之差的年数(非四舍五入) “m”日期之差的月数(非四舍五入) “d”日期之差的天数(非四舍五入) “md”两个日期相减后,其差不足一个月的部分的天数 “ym”两个日期相减后,其差不足一年的部分的月数 “yd”两个日期相减后,其差不足一年的部分的天数 假设你的数据从A2和B2开始,在C2里输入下面公式,然后拖拉复制。 =IF(TEXT(A2,"h:mm:ss")

单片机C延时时间怎样计算

C程序中可使用不同类型的变量来进行延时设计。经实验测试,使用unsigned char类型具有比unsigned int更优化的代码,在使用时 应该使用unsigned char作为延时变量。以某晶振为12MHz的单片 机为例,晶振为12M H z即一个机器周期为1u s。一. 500ms延时子程序 程序: void delay500ms(void) { unsigned char i,j,k; for(i=15;i>0;i--) for(j=202;j>0;j--) for(k=81;k>0;k--); } 计算分析: 程序共有三层循环 一层循环n:R5*2 = 81*2 = 162us DJNZ 2us 二层循环m:R6*(n+3) = 202*165 = 33330us DJNZ 2us + R5赋值 1us = 3us 三层循环: R7*(m+3) = 15*33333 = 499995us DJNZ 2us + R6赋值 1us = 3us

循环外: 5us 子程序调用 2us + 子程序返回2us + R7赋值 1us = 5us 延时总时间 = 三层循环 + 循环外 = 499995+5 = 500000us =500ms 计算公式:延时时间=[(2*R5+3)*R6+3]*R7+5 二. 200ms延时子程序 程序: void delay200ms(void) { unsigned char i,j,k; for(i=5;i>0;i--) for(j=132;j>0;j--) for(k=150;k>0;k--); } 三. 10ms延时子程序 程序: void delay10ms(void) { unsigned char i,j,k; for(i=5;i>0;i--) for(j=4;j>0;j--) for(k=248;k>0;k--);

51单片机延时时间计算和延时程序设计

一、关于单片机周期的几个概念 ●时钟周期 时钟周期也称为振荡周期,定义为时钟脉冲的倒数(可以这样来理解,时钟周期就是单片机外接晶振的倒数,例如12MHz的晶振,它的时间周期就是1/12 us),是计算机中最基本的、最小的时间单位。 在一个时钟周期内,CPU仅完成一个最基本的动作。 ●机器周期 完成一个基本操作所需要的时间称为机器周期。 以51为例,晶振12M,时钟周期(晶振周期)就是(1/12)μs,一个机器周期包 执行一条指令所需要的时间,一般由若干个机器周期组成。指令不同,所需的机器周期也不同。 对于一些简单的的单字节指令,在取指令周期中,指令取出到指令寄存器后,立即译码执行,不再需要其它的机器周期。对于一些比较复杂的指令,例如转移指令、乘法指令,则需要两个或者两个以上的机器周期。 1.指令含义 DJNZ:减1条件转移指令 这是一组把减1与条件转移两种功能结合在一起的指令,共2条。 DJNZ Rn,rel ;Rn←(Rn)-1 ;若(Rn)=0,则PC←(PC)+2 ;顺序执行 ;若(Rn)≠0,则PC←(PC)+2+rel,转移到rel所在位置DJNZ direct,rel ;direct←(direct)-1 ;若(direct)= 0,则PC←(PC)+3;顺序执行 ;若(direct)≠0,则PC←(PC)+3+rel,转移到rel 所在位置 2.DJNZ Rn,rel指令详解 例:

MOV R7,#5 DEL:DJNZ R7,DEL; rel在本例中指标号DEL 1.单层循环 由上例可知,当Rn赋值为几,循环就执行几次,上例执行5次,因此本例执行的机器周期个数=1(MOV R7,#5)+2(DJNZ R7,DEL)×5=11,以12MHz的晶振为例,执行时间(延时时间)=机器周期个数×1μs=11μs,当设定立即数为0时,循环程序最多执行256次,即延时时间最多256μs。 2.双层循环 1)格式: DELL:MOV R7,#bb DELL1:MOV R6,#aa DELL2:DJNZ R6,DELL2; rel在本句中指标号DELL2 DJNZ R7,DELL1; rel在本句中指标号DELL1 注意:循环的格式,写错很容易变成死循环,格式中的Rn和标号可随意指定。 2)执行过程

excel中计算日期差工龄生日等方法

excel中计算日期差工龄生日等方法 方法1:在A1单元格输入前面的日期,比如“2004-10-10”,在A2单元格输入后面的日期,如“2005-6-7”。接着单击A3单元格,输入公式“=DATEDIF(A1,A2,"d")”。然后按下回车键,那么立刻就会得到两者的天数差“240”。 提示:公式中的A1和A2分别代表前后两个日期,顺序是不可以颠倒的。此外,DATEDIF 函数是Excel中一个隐藏函数,在函数向导中看不到它,但这并不影响我们的使用。 方法2:任意选择一个单元格,输入公式“="2004-10-10"-"2005-6-7"”,然后按下回车键,我们可以立即计算出结果。 计算工作时间——工龄—— 假如日期数据在D2单元格。 =DA TEDIF(D2,TODAY(),"y")+1 注意:工龄两头算,所以加“1”。 如果精确到“天”—— =DA TEDIF(D2,TODAY(),"y")&"年"&DATEDIF(D2,TODAY(),"ym")&"月"&DATEDIF(D2,TODAY(),"md")&"日" 二、计算2003-7-617:05到2006-7-713:50分之间相差了多少天、多少个小时多少分钟 假定原数据分别在A1和B1单元格,将计算结果分别放在C1、D1和E1单元格。 C1单元格公式如下: =ROUND(B1-A1,0) D1单元格公式如下: =(B1-A1)*24 E1单元格公式如下: =(B1-A1)*24*60 注意:A1和B1单元格格式要设为日期,C1、D1和E1单元格格式要设为常规. 三、计算生日,假设b2为生日

=datedif(B2,today(),"y") DA TEDIF函数,除Excel2000中在帮助文档有描述外,其他版本的Excel在帮助文档中都没有说明,并且在所有版本的函数向导中也都找不到此函数。但该函数在电子表格中确实存在,并且用来计算两个日期之间的天数、月数或年数很方便。微软称,提供此函数是为了与Lotus1-2-3兼容。 该函数的用法为“DA TEDIF(Start_date,End_date,Unit)”,其中Start_date为一个日期,它代表时间段内的第一个日期或起始日期。End_date为一个日期,它代表时间段内的最后一个日期或结束日期。Unit为所需信息的返回类型。 “Y”为时间段中的整年数,“M”为时间段中的整月数,“D”时间段中的天数。“MD”为Start_date与End_date日期中天数的差,可忽略日期中的月和年。“YM”为Start_date与End_date日期中月数的差,可忽略日期中的日和年。“YD”为Start_date与End_date日期中天数的差,可忽略日期中的年。比如,B2单元格中存放的是出生日期(输入年月日时,用斜线或短横线隔开),在C2单元格中输入“=datedif(B2,today(),"y")”(C2单元格的格式为常规),按回车键后,C2单元格中的数值就是计算后的年龄。此函数在计算时,只有在两日期相差满12个月,才算为一年,假如生日是2004年2月27日,今天是2005年2月28日,用此函数计算的年龄则为0岁,这样算出的年龄其实是最公平的。 本篇文章来源于:实例教程网(https://www.doczj.com/doc/8712941397.html,) 原文链接:https://www.doczj.com/doc/8712941397.html,/bgruanjian/excel/631.html

Delay calculation

t = n × (prescale / f)
t is the delay you need; f is your system clock (SYSCLK); n is the value you are solving for when designing the delay function. Program:

void myDelay30s() reentrant
{
    unsigned int i, k;
    for (i = 0; i < 4000; i++)   /* with my 24.576 MHz system clock and divide-by-12 prescaler this gives roughly a 10 s delay */
        for (k = 0; k < 8000; k++);
}
// n = i * k

| Comment  2012-2-18 20:03  47okey | level 14
Use debug. The run time is shown on the left. Set a breakpoint just past the delay subroutine you want to measure and run to it at full speed. Note the time, then single-step once. Look at the run time on the left again and subtract the previous reading: that difference is the delay of the subroutine. (Not sure whether I can attach a picture.)
Follow-up: If I set the breakpoint at delayms, does the corresponding LCALL in the assembly get executed? Also, how many instruction cycles does a MOV take on the C8051F020? The timing I get from simulation in Keil always differs considerably from my hand analysis of the corresponding assembly.
Reply: When compiling C, the compiler first produces assembly. If you only want the delay time you can ignore the assembly and just read off the run time. On an AT 8051 with a 12 MHz crystal the machine cycle is 1 µs, while a C8051 reaches 1 µs at a 2 MHz clock. Keil's debugger defaults to a 24 MHz frequency; make sure you set your actual crystal frequency.

| Comment  2012-2-23 11:17  kingranran | level 1
Look at the working modes of the C8051's internal timers and use a suitable timer interrupt; that gives a delay of much higher precision.
| Comment  2012-2-29 20:56  衣鱼ccd1000 | level 1
For a precise delay use a timer, but do not set too long a period; for long delays accumulate a counter variable. If precision does not matter, nested for loops are the simple option, but as the program grows they add instability, so they are not recommended.
| Comment

DATEDIF, the year/month/day calculation function

DATEDIF(start_date,end_date,unit)
Start_date is a date representing the first, or starting, date of the period. End_date is a date representing the last, or ending, date of the period. Unit is the type of information to return:

"Y"   The number of complete years in the period.
"M"   The number of complete months in the period.
"D"   The number of days in the period.
"MD"  The difference in days between start_date and end_date, ignoring months and years.
"YM"  The difference in months between start_date and end_date, ignoring days and years.
"YD"  The difference in days between start_date and end_date, ignoring years.

Example 1: Compute the age of someone born on 1973-4-1.
Formula: =DATEDIF("1973-4-1",TODAY(),"Y")   Result: 33
With unit code "Y", the result is the number of whole years between the two dates.
Example 2: Months between 1973-4-1 and the current date.
Formula: =DATEDIF("1973-4-1",TODAY(),"M")   Result: 403
With unit code "M", the result is the number of whole months between the two dates.
Example 3: Days between 1973-4-1 and the current date.
Formula: =DATEDIF("1973-4-1",TODAY(),"D")   Result: 12273
With unit code "D", the result is the number of days between the two dates.
Example 4: Days between 1973-4-1 and the current date, ignoring years.
Formula: =DATEDIF("1973-4-1",TODAY(),"YD")   Result: 220
With unit code "YD", the result is the number of days between the two dates with the difference in years ignored.
Example 5: Days between 1973-4-1 and the current date, ignoring months and years.
Formula: =DATEDIF("1973-4-1",TODAY(),"MD")   Result: 6
With unit code "MD", the result is the number of days between the two dates with the differences in years and months ignored.

Computing date differences in Excel

Computing date differences, length of service and birthdays in Excel

Method 1: Enter the earlier date, e.g. "2004-10-10", in cell A1 and the later date, e.g. "2005-6-7", in cell A2. Click cell A3, enter the formula =DATEDIF(A1,A2,"d") and press Enter: you immediately get the difference in days, 240.
Tip: A1 and A2 stand for the earlier and the later date respectively, and the order must not be reversed. DATEDIF is a hidden function in Excel; it does not appear in the function wizard, but that does not affect its use.
Note: cells A1 and A2 must be formatted as dates, and the formula cell as General.
Method 2: Select any cell and enter the formula ="2005-6-7"-"2004-10-10", then press Enter to get the result immediately. (Subtract the earlier date from the later one, otherwise the result is negative.)

I. Computing working time, i.e. length of service
Suppose the start date is in cell D2:
=DATEDIF(D2,TODAY(),"y")+1
Note: length of service counts both endpoints, hence the +1.
To be precise down to the day:
=DATEDIF(D2,TODAY(),"y")&"年"&DATEDIF(D2,TODAY(),"ym")&"月"&DATEDIF(D2,TODAY(),"md")&"日"

II. Computing how many days, hours and minutes lie between 2003-7-6 17:05 and 2006-7-7 13:50
Suppose the two values are in A1 and B1 and the results go to C1, D1 and E1.
C1: =ROUND(B1-A1,0)

D1: =(B1-A1)*24
E1: =(B1-A1)*24*60
Note: cells A1 and B1 must be formatted as dates; C1, D1 and E1 as General.

III. Computing an age from a birthday
Suppose B2 holds the birthday: =datedif(B2,today(),"y")
The DATEDIF function is described in the help only in Excel 2000; other versions document it neither in the help nor in the function wizard. The function nevertheless exists and is very convenient for computing the days, months or years between two dates. According to Microsoft it is provided for compatibility with Lotus 1-2-3.
Its usage is DATEDIF(Start_date,End_date,Unit), where Start_date is the starting date of the period, End_date the ending date, and Unit the return type: "Y" the number of complete years, "M" the complete months, "D" the days; "MD" the day difference ignoring months and years; "YM" the month difference ignoring days and years; "YD" the day difference ignoring years.
For example, if B2 holds a birth date (year, month and day separated by slashes or dashes), enter =datedif(B2,today(),"y") in C2 (formatted as General); after pressing Enter, C2 shows the computed age. The function only counts a year once a full 12 months have passed: if the birthday is 2004-2-27 and today is 2005-2-28, the computed age is 0, which is in fact the fairest way to count.

Quickly computing a person's age in Excel
Excel's DATEDIF() function can compute the number of years, months or days between two cells, which makes computing a person's age easy. In cell A1 of a blank worksheet enter the birthday, separating year, month and day with slashes or minus signs, and in cell A2 enter

Microcontroller delay calculation

How are C51 delay times calculated? Different variable types can be used for delay loops in a C program. Testing shows that unsigned char produces better-optimized code than unsigned int, so unsigned char should be used for delay variables. Take an MCU with a 12 MHz crystal, so that one machine cycle is 1 µs:

void delay__ms(void)   // x, y, z are fixed values, so the function takes no parameters
{
    unsigned char i,j,k;
    for(i=x;i>0;i--)
        for(j=y;j>0;j--)
            for(k=z;k>0;k--);
}

Delay_Time = [(2z+3)*y+3]*x+5

1. 500 ms delay subroutine
Program:
void delay500ms(void)
{
    unsigned char i,j,k;
    for(i=15;i>0;i--)
        for(j=202;j>0;j--)
            for(k=81;k>0;k--);
}
Analysis: the program has three nested loops.
Inner loop n:   R5*2 = 81*2 = 162 µs          (DJNZ 2 µs)
Middle loop m:  R6*(n+3) = 202*165 = 33330 µs  (DJNZ 2 µs + loading R5 1 µs = 3 µs)
Outer loop:     R7*(m+3) = 15*33333 = 499995 µs (DJNZ 2 µs + loading R6 1 µs = 3 µs)

Outside the loops: 5 µs (subroutine call 2 µs + subroutine return 2 µs + loading R7 1 µs = 5 µs).
Total delay = three loops + overhead = 499995 + 5 = 500000 µs = 500 ms.
Formula: delay time = [(2*R5+3)*R6+3]*R7+5

2. 200 ms delay subroutine
Program:
void delay200ms(void)
{
    unsigned char i,j,k;
    for(i=5;i>0;i--)
        for(j=132;j>0;j--)
            for(k=150;k>0;k--);
}

3. 10 ms delay subroutine
Program:
void delay10ms(void)
{
    unsigned char i,j,k;
    for(i=5;i>0;i--)
        for(j=4;j>0;j--)
            for(k=248;k>0;k--);
}

4. 1 s delay subroutine
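The closed-form formula can be sanity-checked against all three subroutines in a few lines (Python used purely as a calculator here):

```python
def delay_us(x, y, z):
    """Delay in microseconds of the nested loops, per Delay_Time = [(2z+3)*y+3]*x+5."""
    return ((2 * z + 3) * y + 3) * x + 5

print(delay_us(15, 202, 81))   # -> 500000 (the 500 ms routine)
print(delay_us(5, 132, 150))   # -> 200000 (the 200 ms routine)
print(delay_us(5, 4, 248))     # -> 10000  (the 10 ms routine)
```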

Computing the number of years and months between two dates in Excel

Excel has a function for this.
1. Summary: returns the number of year/month/day intervals between two dates.
2. Basic syntax: =DATEDIF(start_date, end_date, unit_code)
3. Example 1: Compute the age of someone born on 1973-4-1.
Formula: =DATEDIF("1973-4-1",TODAY(),"Y")   Result: 33
With unit code "Y", the result is the number of whole years between the two dates.
4. Example 2: Months between 1973-4-1 and the current date.
Formula: =DATEDIF("1973-4-1",TODAY(),"M")   Result: 403
With unit code "M", the result is the number of whole months between the two dates.
5. Example 3: Days between 1973-4-1 and the current date.
Formula: =DATEDIF("1973-4-1",TODAY(),"D")   Result: 12273
With unit code "D", the result is the number of days between the two dates.
6. Example 4: Days between 1973-4-1 and the current date, ignoring years.
Formula: =DATEDIF("1973-4-1",TODAY(),"YD")   Result: 220

With unit code "YD", the result is the number of days between the two dates with the difference in years ignored.
7. Example 5: Days between 1973-4-1 and the current date, ignoring months and years.
Formula: =DATEDIF("1973-4-1",TODAY(),"MD")   Result: 6
With unit code "MD", the result is the number of days between the two dates with the differences in years and months ignored.
8. Example 6: Months between 1973-4-1 and the current date, ignoring years.
Formula: =DATEDIF("1973-4-1",TODAY(),"YM")   Result: 7
With unit code "YM", the result is the number of months between the two dates with the difference in years ignored.

The Excel function for computing the number of days between two dates

Published: 2009-07-21

Syntax
DATEDIF(start_date,end_date,unit)
Start_date is a date representing the first, or starting, date of the period. Dates can be entered in several ways: as quoted text strings (for example "2001/1/30"), as serial numbers (for example 36921, which represents January 30, 2001 in the 1900 date system), or as the results of other formulas or functions (for example DATEVALUE("2001/1/30")). For details on date serial numbers, see NOW.
End_date is a date representing the last, or ending, date of the period.
Unit is the type of information to return:
"Y"   The number of complete years in the period.
"M"   The number of complete months in the period.
"D"   The number of days in the period.
"MD"  The difference in days between start_date and end_date, ignoring months and years.
"YM"  The difference in months between start_date and end_date, ignoring days and years.
"YD"  The difference in days between start_date and end_date, ignoring years.

Remarks
Microsoft Excel stores dates as sequential serial numbers so that calculations can be performed on them. If the workbook uses the 1900 date system, Excel stores January 1, 1900 as serial number 1; if it uses the 1904 date system, Excel stores January 1, 1904 as serial number 0 (and January 2, 1904 as serial number 1). For example, in the 1900 date system Excel stores January 1, 1998 as serial number 35796, because that date is 35795 days after January 1, 1900. See how Microsoft Excel stores dates and times.
Excel for Windows and Excel for Macintosh use different default date systems. For details, see NOW.

Measuring delay times

A modern conference-room sound system should give everyone in the room the best possible speech intelligibility. Usually a main loudspeaker and auxiliary loudspeakers are installed. You may have had this experience: the presenter speaks at the front of the room, but the sound you hear comes from a loudspeaker beside you, so the visual and auditory impressions do not match. Achieving a correctly front-localized sound image is quite tricky.
The XL2 Audio and Acoustic Analyzer offers a helpful solution that makes this practical to achieve. This application note describes some practical examples.
Delay time measurement with the XL2: optimized, enhanced sound.

Basics
The propagation speed, or velocity factor, describes how fast an electrical or radio signal travels through a medium. An electrical audio signal travels through a cable at roughly the speed of light, 300000 km/s. The speed of sound describes how fast sound waves travel through air; it differs from medium to medium and with the properties of the same medium, and is especially strongly affected by temperature. At sea level, at 15 °C (59 °F) and normal atmospheric conditions, the speed of sound is 340 m/s.

Why does delay occur?
An example: in a large hall, an electrical signal travelling through 100 m of cable is delayed by roughly 0.3 µs, whereas sound covering the same distance through air is delayed by roughly 290 ms. This difference is called the propagation delay. In practice the transit time in the cable is simply neglected.

The challenge of sound reinforcement
In a fairly large hall there is no guarantee that every position offers enough signal-to-noise ratio for the human auditory system to receive the information. Speech intelligibility degrades badly at low signal-to-noise ratios, and sound energy falls off at 6 dB per doubling of distance, so many conference rooms and halls need a sound-reinforcement system.
Unfortunately, simply installing some loudspeakers and cable does not solve the problem. Why not? Because the reinforcement loudspeakers are closer to the listeners' ears, most of the sound the listeners hear comes from them, so the listeners' immediate impression is that the presenter is located at the loudspeaker. The inconsistency between the natural source and the sound from the loudspeakers feels unnatural to the audience.
In addition, because the natural wavefront arrives with a propagation delay, the loudspeaker sound is perceived rather like an echo, which further increases the listeners' discomfort and reduces intelligibility.
This is where the Haas effect comes in; taking it into account helps us understand and solve these problems.
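The two transit times quoted above follow directly from the stated speeds; a quick check (Python used only as a calculator, with the speeds as given in the text):

```python
SPEED_IN_CABLE = 300_000_000   # m/s, electrical signal (speed of light, as stated above)
SPEED_OF_SOUND = 340           # m/s, air at 15 degrees C, sea level

distance_m = 100.0

cable_us = distance_m / SPEED_IN_CABLE * 1e6   # delay in the cable, microseconds
air_ms   = distance_m / SPEED_OF_SOUND * 1e3   # delay in air, milliseconds

print(round(cable_us, 2))   # -> 0.33 us
print(round(air_ms))        # -> 294 ms (the text rounds this to about 290 ms)
```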

Oracle functions for computing time differences

(2008-08-20 10:00) Given two Date columns, START_DATE and END_DATE, the difference between the two dates can be computed (in days, hours, minutes, seconds and milliseconds respectively) as:

Days:         ROUND(TO_NUMBER(END_DATE-START_DATE))
Hours:        ROUND(TO_NUMBER(END_DATE-START_DATE)*24)
Minutes:      ROUND(TO_NUMBER(END_DATE-START_DATE)*24*60)
Seconds:      ROUND(TO_NUMBER(END_DATE-START_DATE)*24*60*60)
Milliseconds: ROUND(TO_NUMBER(END_DATE-START_DATE)*24*60*60*1000)

In addition there are the to_date and to_char functions. In ORACLE:
select to_date('2007-06-28 19 51 20','yyyy-MM-dd HH24 mi
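The same breakdown can be reproduced outside Oracle. A Python sketch mirroring the multipliers above (the two dates are hypothetical values standing in for the columns):

```python
from datetime import datetime

# Hypothetical values standing in for the START_DATE and END_DATE columns:
start = datetime(2007, 6, 28, 19, 51, 20)
end   = datetime(2007, 6, 30, 7, 51, 20)

# In Oracle, END_DATE - START_DATE yields a fractional number of days:
frac_days = (end - start).total_seconds() / 86400

print(round(frac_days))             # days    (ROUND(TO_NUMBER(END_DATE-START_DATE)))
print(round(frac_days * 24))        # hours   (... * 24)
print(round(frac_days * 24 * 60))   # minutes (... * 24 * 60)
```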

C delay calculation formula

Category: Work Log | Tags: microcontroller applications   2007-09-13 22:35   Read (?)  Comments (0)

While reading about 8051 C programming today, it struck me that many programs use software delays for timing control, but it was not clear why the following method delays accurately. The program:

void delay(void)
{
    unsigned char i,j,k;
    for(i=15;i>0;i--)
        for(j=202;j>0;j--)
            for(k=81;k>0;k--);
}

Compiling with Keil uVision2 and looking at the generated assembly shows that the loops are implemented with the DJNZ jump instruction. Operating on Rn or a direct address, DJNZ takes 2 machine cycles, i.e. 2 µs with a 12 MHz crystal. The assembly for the code above:

C:0x0013  7F0F  MOV R7,#0x0F    ;1us
;-----------------
C:0x0015  7ECA  MOV R6,#0xCA    ;1us
;-----------------
C:0x0017  7D51  MOV R5,#0x51    ;1us
C:0x0019  DDFE  DJNZ R5,C:0019  ;2us   for(k=81;k>0;k--)
C:0x001B  DEFA  DJNZ R6,C:0017  ;2us   for(j=202;j>0;j--)
;------------------
C:0x001D  DFF6  DJNZ R7,C:0015  ;2us   for(i=15;i>0;i--)

The semicolons and what follows them were added for explanation. Now let us compute the execution time of this code.
The loop for(k=81;k>0;k--) takes 1 + 2×81 µs.
Completing the loop for(j=202;j>0;j--) takes 1 + (1 + 2×81 + 2)×202 µs.
Completing all three loops takes 1 + [1 + (1 + 2×81 + 2)×202 + 2]×15 µs.
Finally, adding the 2 µs for calling the subroutine and the 2 µs for returning, the total delay is
1 + [1 + (1 + 2×81 + 2)×202 + 2]×15 + 2 + 2 = [3 + (3 + 2×81)×202]×15 + 5 = 500000 µs = 500 ms
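The derivation above can also be verified by brute force: count the machine cycles instruction by instruction instead of using the closed form (plain Python mirroring the generated DJNZ code, with the cycle costs stated above):

```python
# Cycle costs at 12 MHz: MOV = 1 us, DJNZ = 2 us, LCALL = 2 us, RET = 2 us.
us = 2 + 1                    # LCALL + MOV R7,#0x0F
for i in range(15):
    us += 1                   # MOV R6,#0xCA
    for j in range(202):
        us += 1               # MOV R5,#0x51
        us += 2 * 81          # DJNZ R5,... executed 81 times
        us += 2               # DJNZ R6,...
    us += 2                   # DJNZ R7,...
us += 2                       # RET
print(us)                     # -> 500000 us = 500 ms
```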

Computing date differences in PHP

How to compute time and date differences in PHP
Computing time differences in PHP can be a nuisance, but once you have mastered the date/time functions it becomes simple. A simple example is computing how many days a book has been on loan, which requires PHP to calculate from the dates. Here are several ways to implement such date arithmetic:
(1) With a database it is easy. On MSSQL you can use a trigger and the dedicated date-difference function datediff(). On MySQL, store the difference of the two date columns in another numeric column and read it back when needed.
(2) Without a database you must rely entirely on PHP's date and time functions. For example, to compute the number of days from 1998-5-3 to 1999-6-5:

If mktime()'s arguments are left at their defaults the current date is used, which lets you compute the number of days from the borrowing date up to today.

Computing a date difference in days in PHP:

<?php
// How many days from today until September 9, 2008?
$Date_1 = date("Y-m-d");
$Date_2 = "2008-09-09";
$d1 = strtotime($Date_1);
$d2 = strtotime($Date_2);
$Days = round(($d2 - $d1) / 3600 / 24);
echo "There are ".$Days." days from today until 2008-09-09";
?>
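The earlier example, whose PHP code was lost from this excerpt (days from 1998-5-3 to 1999-6-5), can be reproduced with the same timestamp arithmetic; here is a Python stand-in, not the original code:

```python
from datetime import date

# Same arithmetic as round((strtotime($d2) - strtotime($d1)) / 3600 / 24) in PHP
d1 = date(1998, 5, 3)
d2 = date(1999, 6, 5)
print((d2 - d1).days)   # -> 398
```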

Using the DateDiff function to compute the difference between two times

Description: returns the time interval between two dates.

Syntax
DateDiff(interval, date1, date2 [,firstdayofweek[, firstweekofyear]])

The DateDiff function's syntax has these parts:
interval         Required. A string expression giving the interval used to compute the difference between date1 and date2. See the Settings section for values.
date1, date2     Required. Date expressions; the two dates used in the calculation.
firstdayofweek   Optional. A constant specifying the first day of the week; if omitted, Sunday is assumed. See the Settings section for values.
firstweekofyear  Optional. A constant specifying the first week of the year; if omitted, the week containing January 1 is assumed. See the Settings section for values.

Settings
The interval parameter can have these values:
yyyy  Year
q     Quarter
m     Month
y     Day of year
d     Day

w     Weekday
ww    Week
h     Hour
n     Minute
s     Second
(Note: in DateDiff, "m" is the month interval and "n" the minute interval.)

The firstdayofweek parameter can have these values:
vbUseSystem  0  Use the regional language support (NLS) API setting.
vbSunday     1  Sunday (default)
vbMonday     2  Monday
vbTuesday    3  Tuesday
vbWednesday  4  Wednesday
vbThursday   5  Thursday
vbFriday     6  Friday
vbSaturday   7  Saturday

The firstweekofyear parameter can have these values:
vbUseSystem     0  Use the regional language support (NLS) API setting.
vbFirstJan1     1  Start with the week in which January 1 occurs (default).
vbFirstFourDays 2  Start with the first week that has at least four days in the new year.
vbFirstFullWeek 3  Start with the first full week of the new year.

Remarks
The DateDiff function is used to determine how many specified time intervals exist between two dates. For example, you can use DateDiff to compute the number of days between two dates, or the number of weeks between today and the last day of the year.
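To make the interval table concrete, here is a minimal emulation of a few of the codes (a Python sketch; only "yyyy", "d", "h", "n" and "s" are shown, without firstdayofweek handling, and "d" matches DateDiff only when both times are at midnight):

```python
from datetime import datetime

def date_diff(interval, date1, date2):
    """Rough stand-in for DateDiff for a handful of interval codes."""
    delta = date2 - date1
    seconds = delta.total_seconds()
    if interval == "yyyy":
        # DateDiff counts calendar-year boundaries crossed, not elapsed years
        return date2.year - date1.year
    if interval == "d":
        return delta.days            # matches "d" when both times are midnight
    if interval == "h":
        return int(seconds // 3600)
    if interval == "n":              # "n" is minutes ("m" would be months)
        return int(seconds // 60)
    if interval == "s":
        return int(seconds)
    raise ValueError("interval code not implemented in this sketch")

a = datetime(2003, 7, 6)
b = datetime(2006, 7, 7)
print(date_diff("d", a, b))      # -> 1097
print(date_diff("yyyy", a, b))   # -> 3
```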
