For a type of object, a base structural model may be selected, like car, chair, ball, table, human, etc. Particles are then added to the base structural model to fit the 3D shape of the specific real object and form the structural model for the specific real object. The 3D shape of the real object can be determined based on the object boundary data, like edges or a polygon mesh, determined for the object and 3D structure data determined to fit the object.
In other examples, a base structural model is not used, and particles are fitted within the 3D shape. Each particle may be assigned one or more physics parameters based on the physics parameters data set N determined for the real object. For example, the total mass assigned to the object may be subdivided among the particles. The subdivision of mass need not be evenly distributed if physical properties for different parts of an object are known.
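As a rough illustration, the following Python sketch subdivides an object's total mass among particles filling its 3D shape; the `Particle` class, `build_structural_model`, and the weighting scheme are illustrative assumptions, not taken from the source.

```python
from dataclasses import dataclass

@dataclass
class Particle:
    position: tuple   # (x, y, z) within the object's 3D shape
    mass: float       # this particle's share of the object's total mass
    material: str = "unknown"

def build_structural_model(shape_points, total_mass, weights=None):
    """Fill a 3D shape with particles and subdivide the total mass.
    weights lets the subdivision be uneven when different parts of
    the object are known to differ (e.g. glass top vs. wood supports)."""
    weights = weights or [1.0] * len(shape_points)
    total_w = float(sum(weights))
    return [Particle(position=p, mass=total_mass * w / total_w)
            for p, w in zip(shape_points, weights)]

# Even subdivision of a 0.6 kg ball across four sample points.
particles = build_structural_model(
    [(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0)], total_mass=0.6)
print(particles[0].mass)  # 0.15
```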
In the case of the glass table 32, the particles modeling the glass may have a smaller or larger mass than the wood sides depending on the thickness and type of glass determined. Similarly, the particles in the wood supports have a material type of wood associated with them and a tensile strength associated with wood assigned to them, while the particles for the glass table have a material type and tensile strength for glass. For virtual objects, an application can assign the types of actions performed by each object.
Some examples of actions are shoot a bullet, fly, be thrown, roll, bounce, slide, shatter, fold, crack, break, etc. For real objects, the physics engine may associate by default actions from an action simulator library based on the type of object. For display of an action, the physical properties N of the object are submitted as input parameters so the action is performed more realistically.
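A minimal sketch of how default actions might be keyed to an object type; the table contents and function names are hypothetical, since the source does not specify the action simulator library's interface.

```python
# Hypothetical default-action table keyed by object type.
DEFAULT_ACTIONS = {
    "ball":  ["be_thrown", "roll", "bounce"],
    "glass": ["shatter", "crack", "break"],
    "paper": ["fold"],
}

def default_actions(object_type):
    # Fall back to no actions when the object type is unknown.
    return DEFAULT_ACTIONS.get(object_type, [])

assert "bounce" in default_actions("ball")
```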
Like with actions, the application can assign pre-collision events for virtual objects, either from the pre-collision events library or by creating the events and storing them in the library. Similarly, collision effects can be assigned by the application for a virtual object.
The physics engine may select by default pre-collision events, collision effects, and sound effects for real objects based on the type of object. An application may also track one or more real objects in an environment and assign actions, pre-collision events, collision effects, sound effects and visual effects to the real objects, or to events, actions or collisions relating to them, as well.
User input may cause the application to do so in some instances. Sound effects and visual effects may also be updated based directly or indirectly on user input.
For example, a user may have selected a stadium mode for the basketball application such that a collision of a basket being scored is accompanied by a swoosh sound, crowd noise and a highlighting effect on the ball as it passes through the net of the basket hoop. The simulation of action or effects or both determined by the physics engine for either a real or virtual object in three dimensions is then transformed into display data usable by the virtual data engine to represent the simulation on the display.
The display data may be implemented in a markup language. The virtual data engine processes virtual objects and registers the 3D position and orientation of virtual objects or imagery in relation to one or more coordinate systems, for example in display field of view coordinates or in the view independent 3D map coordinates. The virtual data engine determines the position of image data of a virtual object or imagery in display coordinates for each display optical system. Additionally, the virtual data engine performs translation, rotation, and scaling operations for display of the virtual data at the correct size and perspective.
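The translation, rotation, and scaling steps can be pictured as a single homogeneous transform applied to virtual image data. This sketch, with an assumed y-axis rotation and row-major 4x4 matrices, is one conventional way to do it, not the engine's actual implementation.

```python
import math

def transform_matrix(scale, yaw_radians, translation):
    """Compose scale, a rotation about the y axis, and translation
    into a single 4x4 homogeneous transform (row-major)."""
    c, s = math.cos(yaw_radians), math.sin(yaw_radians)
    return [
        [scale * c,  0.0,   scale * s, translation[0]],
        [0.0,        scale, 0.0,       translation[1]],
        [-scale * s, 0.0,   scale * c, translation[2]],
        [0.0,        0.0,   0.0,       1.0],
    ]

def apply(m, point):
    x, y, z = point
    v = (x, y, z, 1.0)
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(3))

# Place a unit-sized virtual object at (1, 0, 2), half size, rotated 90 degrees.
m = transform_matrix(0.5, math.pi / 2, (1.0, 0.0, 2.0))
print(apply(m, (1.0, 0.0, 0.0)))  # -> approximately (1.0, 0.0, 1.5)
```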
A virtual data position may be dependent upon a position of a corresponding object, real or virtual, to which it is registered. The virtual data engine can update the scene mapping engine about the positions of the virtual objects processed. Device data may include a unique identifier for the computer system 8 and a network address. Particularly for the see-through, augmented reality display device system 8, the device data may also include data from sensors or determined from the sensors like the orientation sensors, the temperature sensor, the microphone, and the one or more location and proximity transceivers. For illustrative purposes, the method embodiments below are described in the context of the system embodiments described above.
However, the method embodiments are not limited to operating in the system embodiments described above and may be implemented in other system embodiments. Furthermore, the method embodiments are performed continuously; there may be multiple collisions between objects being processed for a current display field of view, and the field of view changes as the user moves her head and as real and virtual objects move.
A display typically has a display or frame rate which updates faster than the human eye can perceive, for example 30 frames a second. Using the software system embodiment of FIG. A 3D space is a volume of space occupied by the object. Depending on the precision desired, the 3D space can match the 3D shape of the object or be a less precise bounding shape, like a bounding box or bounding ellipse, around the object.
A 3D space position represents position coordinates for the perimeter of the volume or 3D space. In other words, the 3D space position identifies how much space an object occupies and where in the display field of view that occupied space is. In one example, based on the parts of the objects meeting for the collision, for example hand and ball, the physics engine may identify one or more types of collisions which an application has registered with it.
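A bounding box is one common way to represent such a 3D space, and an axis-aligned overlap test gives a cheap first check for candidate collisions. The sketch below assumes axis-aligned boxes; the source does not mandate a particular test.

```python
def bounding_box(points):
    """Axis-aligned bounding box: a less precise volume standing in
    for an object's exact 3D shape."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def boxes_overlap(a, b):
    """True when two boxes intersect on every axis -- a cheap first
    check before more precise collision processing."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

ball = bounding_box([(0, 0, 0), (0.24, 0.24, 0.24)])
hand = bounding_box([(0.2, 0.2, 0.2), (0.4, 0.4, 0.4)])
print(boxes_overlap(ball, hand))  # True: candidate collision
```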
For example, a dribble may be identified from motion data, determined from image data, of a human palm facing downward and pushing a virtual basketball, based on tracking of the ball's position. The basketball application may have also registered the dribble action as a gesture, so when the gesture recognition engine notifies the basketball application of the gesture, the basketball application can notify the physics engine of the gesture as a type of collision.
In step , the physics engine determines at least one effect on at least one physical property of the real object due to the collision based on its one or more physical properties and the physical interaction characteristics for the collision. The physics engine simulates realistic behavior of objects based on forces being applied to objects.
Physical interaction characteristics describe parameter inputs for a force equation and the resultant one or more forces which are changeable for each collision. An example of a physical interaction characteristic is the speed at which each object is traveling when they meet. So is each object's velocity, a vector quantity indicating the object's direction and its speed in that direction.
For an object like a human being, there may be different velocities for different parts of the body which may make up a composite velocity for a part making the contact in the collision. Another physical interaction characteristic is the composite mass for the object or object part involved in the collision.
A basketball thrown by a stationary human receives less force than a basketball thrown by a human riding a bicycle. There may be ambient factors like wind or rain which affect velocity as well. Force vectors are determined by the physics engine on each of the objects based on their physical properties, like mass and tensile strength, and physical interaction characteristics, like velocity and environmental forces like gravity. A frictional force may be determined based on a coefficient of friction for a surface type of material.
A spring force may be determined if a modulus of elasticity of an object satisfies criteria to be significant. A resting force may be determined based on the types of materials making up an object. The physics engine simulates the action of the force vectors on each structural model of the respective real and virtual objects. Besides determining resultant actions, a collision effect on each object is determined.
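As a simplified illustration of composing force vectors, the sketch below sums gravity, an applied force, and a kinetic friction term for a single object; the friction model and the coefficient value in the example are assumptions, since the source gives no numeric values.

```python
def net_force(mass, applied, mu=0.0, g=9.81):
    """Sum gravity, an applied force, and kinetic friction for one
    object; forces are (x, y, z) vectors in newtons, y is up."""
    gravity = (0.0, -mass * g, 0.0)
    # Kinetic friction magnitude ~ mu * normal force (mass * g here),
    # opposing the applied horizontal force in this simplified case.
    friction = (-mu * mass * g if applied[0] > 0 else 0.0, 0.0, 0.0)
    return tuple(a + b + c for a, b, c in zip(applied, gravity, friction))

# 0.6 kg basketball pushed with 10 N horizontally across glass,
# with an assumed coefficient of friction of 0.4.
print(net_force(0.6, (10.0, 0.0, 0.0), mu=0.4))
```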
At least one physical property of an object is affected by a collision, but not all effects are visible to the human eye. In some embodiments, criteria are stored for each collision effect for determining whether the effect should be displayed. Some examples of criteria factors are a force strength, a force direction and a force density. What kind of effect is suitable is based on the forces involved, but is also particular to the physical properties of a specific object like mass, structure as may be represented by size and shape, surface material and inner materials of an object.
An example of an effect was illustrated in FIG. For the real object, a cracked-glass collision effect represented by cracks 35 N was displayed along with image data of broken glass pieces 37 N. For the virtual basketball, the effect was the bottom part of the ball appearing flattened 39 as it contacted the glass table top. Due to the strength of the reactive force directed from the table, the downward force applied by the ball, and the force of gravity, given its direction and speed, a contact area with the glass table is determined for the basketball.
In this example, the contact area is the spherical bottom of the ball which is modeled as deforming inward, so the shape, a physical property of the basketball, was altered. From Joe's perspective, the bottom of the ball flattens. As glass has poor tensile strength, the cracked-pattern effect has criteria such as a minimum strength of force being exerted over a maximum surface area and within an angular range of the glass table top's surface. Based on the changes to the structural model determined by the physics engine, in step the virtual data engine generates image data of the real object simulating the at least one effect on the at least one physical property of the real object, like the change in the surface of the glass table, and in step displays the image data of the real object registered to the real object in the display field of view.
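Stored criteria like these might be checked as follows; the threshold values and parameter names are assumed, since the source only names the kinds of criteria (a minimum force strength over a maximum contact area, within an angular range).

```python
def crack_effect_applies(force_n, contact_area_m2, impact_angle_deg,
                         min_force=50.0, max_area=0.01,
                         angle_range=(60.0, 120.0)):
    """Return True when a collision on a glass surface meets the stored
    criteria for displaying a cracked-glass effect. The thresholds are
    assumed values for illustration only."""
    low, high = angle_range
    return (force_n >= min_force
            and contact_area_m2 <= max_area
            and low <= impact_angle_deg <= high)

# A hard, near-perpendicular hit concentrated on a small area cracks it.
print(crack_effect_applies(force_n=80.0, contact_area_m2=0.005,
                           impact_angle_deg=90.0))  # True
```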
As in the embodiment of FIG. Optionally, in step , the physics engine identifies audio data based on the at least one effect on the real object and its one or more types of material. For example, the sound library or the materials lookup table may include parameter values for affecting pitch, tone and volume of audio data based on a type of material.
Similarly, in optional step , a visual effect may be identified for display. Some examples of bases of identification may be the type of collision, a force strength associated with the collision, or a linking between the visual effect and the collision effect made by an application or in response to user input.
In step , the 3D audio engine plays the audio data and the virtual data engine, in step , displays the visual effect. Like for the real object, in step , the physics engine determines at least one effect on at least one physical property of the virtual object due to the collision based on its one or more physical properties and the physical interaction characteristics for the collision, and the virtual data engine, in step , modifies the image data of the virtual object representing the at least one effect on the at least one physical property of the virtual object and, in step , displays the modified image data of the virtual object.
The physics engine may also optionally, in step , identify audio data based on the at least one effect on the virtual object and its assigned one or more types of material, and may also, optionally in step , identify a visual effect for display with the modified image data. The identification may be based on links registered by an application between an audio effect and a collision effect and between a visual effect and a collision effect.
The 3D audio engine plays the audio data in optional step , and the virtual data engine displays the visual effect in optional step . When a new user enters an environment like the living room in FIGS. In step , the scene mapping engine receives a 3D mapping of a user environment including a 3D space position and an identifier for a real object and a 3D space position and an identifier for a virtual object.
The physics engine retrieves in step a stored physics model for the real object based on the identifier for the real object, and in step retrieves a stored physics model for the virtual object based on the identifier for the virtual object. A 3D mapping identifying different objects and linking physics models to the objects may not always exist, or a physics model may not exist for an object. An example of such a situation is when a new object is in a previously mapped area, or a 3D mapping is being initially generated for an environment.
In step , the object recognition engine identifies a real object in a display field of view of a see-through, augmented reality display device based on image data of the display field of view and in step determines one or more physical properties of the real object based on the image data of the display field of view.
The physics engine is notified of the identified objects and their physical properties, and in step searches stored physics models for an existing model representing the real object, based on the one or more physical properties determined for the real object, in order to determine as in step whether there exists a physics model for the real object. For example, physics models uploaded by users over time to a physics data store maintained by a software platform which supports the physics engine may be searched.
Additionally, publicly accessible and privately accessible physics engine models in a standardized format may be searched. Examples of privately accessible models are those linked to user profile data which are accessible to a service, for example a gaming service, with which a user is registered and whose service platform supports the physics engine. Responsive to an existing physics model being found, the physics engine in step uses the existing physics model for representing the real object.
For example, a basketball made by a certain manufacturer may have a physics model already stored for it, and the real object is the same kind of basketball. Responsive to an existing physics model not being found, in step , the physics engine automatically generates a physics model for the real object based on the one or more physical properties of the real object.
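The search-then-generate flow could look like the following sketch; the store keyed by object type and material, and the generated model's fields, are illustrative assumptions.

```python
# Hypothetical store of previously uploaded physics models, keyed by
# (object type, material); a real store would match on more properties.
MODEL_STORE = {
    ("basketball", "rubber"): {"mass": 0.6, "elasticity": 0.85},
}

def get_physics_model(object_type, material, properties):
    """Use an existing physics model if one matches, otherwise
    auto-generate one from the detected physical properties."""
    existing = MODEL_STORE.get((object_type, material))
    if existing is not None:
        return existing
    generated = {"mass": properties.get("mass", 1.0),
                 "elasticity": properties.get("elasticity", 0.5)}
    MODEL_STORE[(object_type, material)] = generated  # store for reuse
    return generated

print(get_physics_model("basketball", "rubber", {}))        # found
print(get_physics_model("table", "glass", {"mass": 20.0}))  # generated
```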
In step , the physics engine generates and stores a structural model, e.g. a particle system. For example, as discussed previously for a particle system physics engine, particles, which can be controlled individually, can be assigned to each structure in 3D structure data for an object to fill out the 3D shape of the object. For example, particles can be associated with an arm or a leg of a human skeleton as a structure, and the particles fill in the region between the representation of a bone and object boundary data, like a polygon mesh region representing the outer surface of the arm.
In step , the physics engine associates one or more collision effects with the real object in the physics model based on the type of object and one or more types of material of the real object, and in step , the physics engine associates one or more actions for the real object with the physics model based on a type of object for the real object.
In step , the physics engine may optionally automatically store the physics model in a network accessible data store for physics models as permitted by user share permissions. Besides designating share permissions and such, user input may also indicate a change in a physical property associated with a real object.
For example, the basketball application may allow a user to select a cartoon mode or a rubber mode in which objects react unrealistically with respect to their actual physical natures. A user can also pull up a virtual menu provided by the physics engine or an application, gaze at an object, and select by gesture or voice a physical property change which is then applied to the object, be it virtual or real. The result, for example when a user selects rubber for the glass table, is that the physics engine generates a version of the physics model of the glass table in which a material type of rubber with a very high modulus of elasticity is substituted for the glass.
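A property substitution of this kind amounts to cloning the realistic model and overriding a material property; the sketch below assumes dictionary-based models, and the elasticity values are made up for illustration.

```python
import copy

def override_material(model, material, **property_changes):
    """Return a new version of a physics model with a substituted
    material, leaving the original (realistic) model intact."""
    variant = copy.deepcopy(model)
    variant["material"] = material
    variant.update(property_changes)
    return variant

glass_table = {"material": "glass", "modulus_of_elasticity": 0.01}
# Rubber-mode table: a very high modulus of elasticity, per the
# source's description; the numeric value is assumed.
rubber_table = override_material(glass_table, "rubber",
                                 modulus_of_elasticity=0.95)
print(rubber_table)   # bounces instead of cracking
print(glass_table)    # unchanged realistic model
```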
In FIG. The bottom of virtual object is more rounded, or less flat, as the rubbery table top bends around the basketball, unlike the absorption of energy by the glass table top in FIG. Joe moves from position to position to view the path of the basketball from the table 32, across the room, to colliding with the television 16 with such force, as shown by position , that the TV display not only cracks, as illustrated by cracks like representative crack 45 N, but breaks, resulting in glass pieces 46 and 47 separating from the display and falling to the ground, with the assistance of a gravity force component, from the vertical surface of the television display. The cracking and breaking of the display is based on its physics model, which is based on realistic physical properties of the TV display. The applications may be executing on the same display device system 8 or on different display device systems 8 as in the example of FIGS.
Using the system embodiment of FIG. For example, user input may have selected a share mode for a display device system 8 in which virtual data generated by the device system 8 can be shared with other systems 8.
In step , identification data is communicated for the one or more shared virtual objects to one or more applications which lack control over the one or more shared virtual objects. The operating system may indicate to the applications that they were designated for share mode and set up interprocess communication between the applications.
In step , each application sharing one or more virtual objects, which it generates and controls, provides access to a physics model for each of the one or more shared virtual objects for collision processing. For example, the physics models for the virtual objects may be in the library, but the physics engine is notified that the virtual objects are shared and can therefore take part in collisions.
In step , each application sharing one or more virtual objects which it controls communicates action and 3D space position data for the one or more shared virtual objects to one or more applications which lack control over them. The communication may be indirect, for example via the scene mapping engine and the physics engine . The 3D scene mapping engine may receive an update of the applications to which to report 3D space position data of the shared virtual objects, and it updates a 3D mapping of a user environment to include space position data of the one or more shared virtual objects.
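One way to picture the indirect communication is a scene-mapping relay that forwards shared-object positions to applications lacking control over them; the class and callback names below are assumptions, not the system's actual interfaces.

```python
class SceneMapping:
    """Relays 3D space positions of shared virtual objects to the
    applications that do not control them (an assumed interface)."""
    def __init__(self):
        self.subscribers = {}   # object id -> apps lacking control

    def share(self, object_id, other_apps):
        self.subscribers[object_id] = list(other_apps)

    def report_position(self, object_id, position):
        for app in self.subscribers.get(object_id, []):
            app.on_shared_object_update(object_id, position)

class PetDogApp:
    def on_shared_object_update(self, object_id, position):
        # Treat the shared basketball like a real ball in its own logic.
        print(f"pet app sees {object_id} at {position}")

scene = SceneMapping()
scene.share("virtual_basketball_30", [PetDogApp()])
scene.report_position("virtual_basketball_30", (1.2, 1.5, 3.0))
```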
In step , the virtual data engine displays any of the one or more shared virtual objects having a 3D space position in a current display field of view from the perspective indicated by the display field of view. The perspective indicated by the display field of view is also referred to as the user perspective.
Bob 24 has entered the living room where Joe 18 has just thrown virtual basketball 30 towards the virtual basketball net . The display systems 8 2 and 4 of Bob and Joe can communicate in a share mode in which a virtual object from an application executing on either device may be seen in the see-through, augmented reality display 2 of the other user as well.
In one example, software executing on the hub computer 12 identifies that Joe and Bob are present in the same user environment based on device data like a location sensor, or based on their both establishing a communication session with the hub system 12 and sharing their field of view data or location data or both. In another example, Joe and Bob are friends identified in user profile data , and software executing on the hub 12 can indicate to each that the other is present in the same location based on location data or uploaded image data.
In another example, the display systems 8 themselves may be in a peer-to-peer share mode. Each user may grant permission to share their displayed data at one or more levels. Identification data may be transferred directly via IR transceivers or over a network.
The other application still controls the behavior and appearance of the virtual object, so in some instances, the application which lacks control over the virtual object may categorize the virtual object as a real object for purposes of control or interaction. For example, the basketball application may identify the pet dog 42 as a real object, and the pet dog application 42 may identify the basketball 30 as a real object.
Some other category or designation indicating lack of control may be applied in other examples as well. The physics engine simulates a collision between the dog 42 and the basketball 30 based on their physics models defined by their respective applications. The pet dog application checks the 3D mapping for a ball and identifies the virtual basketball 30 moving through the air.
In this view of the example, both Bob and Jim are looking at the virtual basketball 30 through their see-through, augmented reality display devices 2 , and each sees from his respective perspective Fido 42 air-catch the ball in order to fetch it for Bob . The pet dog application treats the virtual basketball 30 as any other ball, determines the ball is in the air based on the 3D mapping, and selects an air catch action which it has registered in the action simulator library and which the pet dog application has associated with the physics model for Fido.
The physics engine simulates the air catch action and contours Fido's body around the basketball 30 based on the type of action, air catch, and the physics models for Fido and the basketball . Fido's action is reported to the basketball application, which processes the action with respect to the basketball according to its own logic.
For example, the basketball application may categorize Fido's catch as a steal of the ball in Joe's profile and statistics for his session according to its rules.
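The registration-and-dispatch pattern behind the air catch might be sketched as follows; the library structure and function names are hypothetical, since the source does not expose the action simulator library's interface.

```python
# Hypothetical action simulator library: applications register named
# actions; the physics engine simulates whichever one is selected.
ACTION_LIBRARY = {}

def register_action(name, simulate):
    ACTION_LIBRARY[name] = simulate

def run_action(name, *args):
    return ACTION_LIBRARY[name](*args)

register_action("air_catch",
                lambda dog, ball: f"{dog} contours around {ball}")
print(run_action("air_catch", "Fido", "basketball_30"))
```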
The following discussion describes some example processing for updating a display to position virtual objects so that they appear realistically at 3D locations in the display determined for them by the physics engine or an application. In one example implementation of updating the 3D display, the virtual data engine renders the previously created three dimensional model of the display field of view including depth data for both virtual and real objects in a Z-buffer.
The real object boundaries in the Z-buffer act as references for where the virtual objects are to be three dimensionally positioned in the display, as the image generation unit only displays the virtual objects because the display device is a see-through display device.
For a virtual object, the virtual data engine has a target space position of where to insert the virtual object. In some examples, the virtual object target position is registered to a position of a real world object, and in other examples, the virtual object is independent of a particular real object.
A depth value is stored for each display element or a subset of display elements, for example for each pixel or for a subset of pixels. Virtual images corresponding to virtual objects are rendered into the same z-buffer and the color information for the virtual images is written into a corresponding color buffer. The virtual images include any modifications to virtual image data based on collision processing.
In this embodiment, the composite image based on the z-buffer and color buffer is sent to the image generation unit to be displayed at the appropriate pixels. The display update process can be performed many times per second. As with different aspects of the methods discussed above, the different steps for updating the display may be performed solely by the see-through, augmented reality display device system 8 or collaboratively with one or more other computer systems like the hub computing systems 12 or other display device systems 8.
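A minimal z-buffer composite for a see-through display might look like the following sketch: real-object depths seed the buffer as occlusion references, and color is written only for virtual fragments that pass the depth test. The buffer layout and fragment format are assumptions.

```python
def composite(real_depth, virtual_fragments):
    """Z-buffer compositing for a see-through display: real-object
    depths act only as references; color is written for the virtual
    fragments that win the depth test, everything else stays clear."""
    h, w = len(real_depth), len(real_depth[0])
    zbuf = [[real_depth[y][x] for x in range(w)] for y in range(h)]
    color = [[None] * w for _ in range(h)]   # None = see-through
    for x, y, z, rgb in virtual_fragments:
        if z < zbuf[y][x]:                   # virtual pixel is nearer
            zbuf[y][x] = z
            color[y][x] = rgb
    return color

# One real pixel at depth 2.0; a virtual fragment at 1.5 occludes it,
# while a fragment at 3.0 behind it is culled.
print(composite([[2.0]], [(0, 0, 1.5, (255, 128, 0)),
                          (0, 0, 3.0, (0, 0, 255))]))
```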
With reference to FIG. In its most basic configuration, computing device typically includes one or more processing units, including one or more central processing units (CPU) and one or more graphics processing units (GPU). Computing device also includes memory. Depending on the exact configuration and type of computing device, memory may include volatile memory such as RAM, non-volatile memory such as ROM, flash memory, etc.
This most basic configuration is illustrated in FIG. Such additional storage is illustrated in FIG. Device may also contain communications connection(s) such as one or more network interfaces and transceivers that allow the device to communicate with other devices.
Device may also have input device(s) such as keyboard, mouse, pen, voice input device, touch input device, etc. Output device(s) such as a display, speakers, printer, etc. may also be included. All these devices are well known in the art and need not be discussed at length here. The example computer systems illustrated in the figures include examples of computer readable storage devices.