
Monday, April 19, 2010

A Second Life for Humans

Hi friends....
In recent years we have seen many marvels created by our IT people. Now they have developed another new technology, in the spirit of Pranav Mistry's SIXTH SENSE, that is relevant to our daily lives and helps people who use the Internet every day. I think it will be very useful and very interactive for users.
                                Here I am going to explain a technology that I believe will rule the coming years. I am very proud to share this news with you all.

First I need to give a little introduction to a person with long experience in our IT field. He is very talented and experienced in IT and multimedia too.
        Venkat Chinniah, also known as Unniyan Gears in Second Life, is a seasoned web designer with knowledge and experience in multimedia, animation and graphics. He has 20 years of experience in the IT industry as a consultant and coordinator in fields such as DTP, multimedia, CD publishing and web design, focusing on Internet concepts with a higher level of automation that results in self-supporting applications.
       So what is Second Life? Second Life is a three-dimensional interactive virtual world!


Second Life is the creation of Linden Lab, a privately held company based in San Francisco. It was launched in 2003 but took a few years to gain popularity. Users can sign up for free to establish an account. After installing a downloadable client program, you can access Second Life and select an "avatar" to represent your virtual persona. While the initial avatar selection is somewhat limited, its appearance can subsequently be customized using a tool that allows editing of different body parts and items of clothing, allowing potentially infinite variations of avatars for representing different people.

Second Life is a persistent, permanently editable, 3D online world. User uptake is growing at around 20% per month. Since its release in 2003, over six million people have downloaded the 30-MB client, registered and logged on. Noted Microsoft blogger Robert Scoble describes Second Life as a "platform within a platform" within which you can "store files ... build a video game ... a music store ... a dance studio ... a city ... a helicopter ... or a video screen that plays whatever content you want ... or a fountain that spits blood."

Content creation, which creators own and are free to monetize, is driving the in-world economy, with spending on virtual products during May 2007 amounting to more than 1.5 million US dollars per day. If the technological challenges could be overcome, it would blur the conceptual boundaries between Second Life's metaverse and other virtual environments, furthering the prospect of such worlds becoming interoperable, with services such as education, business and media flowing across them. It also points to a mass uptake of 3D online digital services.


  According to R. Sivakumar, Managing Director (Sales and Marketing Group), Intel South Asia: today, all Fortune 1,000 companies are using Second Life for promoting their products and brands. IBM and Intel have been using it for conducting virtual conferences and saving costs on travel and meetings. Many universities are conducting classes through it. In southern India, an automotive company has recently opened an office using this platform to promote its brand and products.


Though Intel has used Second Life for virtual conferences, it is a very nascent tool and will evolve in the future. However, many companies have started using this platform. For example, Warner Bros promoted its movie "I Am Legend" through Second Life, which acted as a powerful medium for creating awareness of the film. Many international resorts are promoting their offerings through virtual media. L'Oreal has been selecting its models through this platform. IT major Wipro Technologies has set up its virtual innovation centre in Bangalore for testing services on Second Life, according to Vinayak Sharma, consultant at Anantara Solutions.

Sunday, Jun 28, 2009 The Hindu

Second Life (SL) is terabytes of information, objects and activities, almost entirely user generated. It has become a complex virtual world with its own economic and cultural practices. At the busiest times, there can be upwards of 35,000 people 'in-world' at the same time. They represent a diverse community ranging from curious onlookers to special interest groups, educational institutions, media companies and global corporations. Live events work really well (at least for 400 people or fewer). Think live music, focus groups, meetings, discussions, tours, debates, presentations, or watching the launch of the Space Shuttle with space enthusiasts from around the world. There was a recent "SL Best Practices in Education" conference with around 1300 registered attendees from around the world. All of these events are great opportunities to meet people with similar interests, and attendees can join groups to create communities.

                                 Second Life is a great boon to those with physical disabilities. One can only imagine the kind of experiences people will be able to have as the technology improves. Finally, and possibly most importantly, technologies like Second Life provide people with a chance to try out living very different lives. Avatars cross gender, race, and cultural lines, blurring the differences that can be obvious in real life interactions. The social implications of a more powerful and immersive environment are immense, and could change the way we see each other in a way that was previously unimaginable.


Thursday, April 8, 2010

Today's Interview - Jamshed Avari

 THE HIDDEN FACTORIES:-      

                              ODM (original design manufacturer) companies aren’t given much thought when we talk about technology brands, yet these are the ones which have the most power to decide what kind of devices the world uses. Very few of the laptop, smartphone, gadget and computer component brands in the world actually manufacture their own products today. As tech gets more and more sophisticated, companies need to keep spending money to upgrade their manufacturing facilities to stay on the leading edge. They also need to trim prices as much as possible to remain competitive in an open market where everyone uses the same basic components to build their devices. With such conflicting interests, the costs of maintaining a manufacturing facility begin to outweigh the benefits.


           ODMs allow companies to outsource manufacturing. They leverage economies of scale by producing in enormous quantities. Nearly all of them are based in China, where employees are cheap and easily replaceable (and, some allege, easy to exploit without legal trouble). They build nearly anything, to any specification, and at any level of quality. They also offer anonymity—you’ll rarely find the names Compal, Quanta, Clevo, or Inventec on these devices, yet these companies build and often design the whole product.


We’re used to thinking of the word “Chinese” as derogatory for something of such poor build quality that it’s nearly disposable: toys with sharp edges, flimsy appliances, and even imitation watches and clothes. But nearly everything with an American or European brand name on it (including Apple’s gorgeous plastic and aluminium machines, which set benchmarks for style and production quality) comes out of an ODM factory in China. Some companies spend a lot of money to have their own designs turned into real products; others pay for exclusivity (so no one ever needs to know where their products came from); still others just pick up readymade devices by the thousand and slap their own stickers on. Apple makes it a point to label all its products as “designed in California”, which shows how well its ODM partners follow designs so that Apple’s image stays unaffected. At the other end, small-time companies sell identical-looking laptops, and even retail chains stock self-branded devices which are clones of these, either hoping no one else has the same design, or accepting this as a part of doing business.
In fact, very few companies that claim they have spent money on research and development to come up with special products for Indian conditions are telling the whole truth. They might only be picking from a limited number of customization options offered by the ODMs. Surprisingly, a small number of companies thrive by picking up ODM designs, boasting that they are sourced from exactly the same production lines as big-brand devices, and selling them at lower rates!

ODMs not only control production, but are getting ambitious themselves. Watch out for new, more powerful Asian brands emerging in the very near future.

Coming soon: 3-D TV

As those of you who've closely followed my online editorial coverage of recent years know, the booming recent interest in 3-D video content is no surprise. It didn't take the impressive success of Avatar and other 3-D movies to capture my attention, and I was following the embryonic 3-D industry long before being exposed to the diversity of hardware, software, services, and content at January's CES (Consumer Electronics Show) in Las Vegas. More simply, all it took for me to realize a few years ago that 3-D would be the next big thing was a history lesson.


Black-and-white movies began their transition in the late 1920s from silent films to “talkies.” Viewer demand, along with movie studios' desires to enhance content appeal and expand the market, drove this migration. By the end of World War II, however, black-and-white televisions were becoming commonplace in homes, and studio and theater owners alike consequently saw TV as a distraction competing for potential movie viewers' eyeballs and wallets. As such, they competitively accelerated what had previously been a somewhat leisurely transition from black-and-white to color cinema.


The first NTSC (National Television System Committee) broadcast occurred in late 1953, with standardization at the end of that year. Seeing the writing on the wall, movie studios and directors responded by further upping the ante versus TV. Their competitive response was twofold: Wide-screen films almost immediately gained traction and quickly became commonplace once the industry resolved the disparities between competitive wide-screen technologies. Widespread use of surround sound in cinema is a more recent phenomenon, although it dates from 1940's Fantasia (Reference 1).


Flash-forward to the present, and high-definition wide-screen displays and high-fidelity surround-sound-audio systems are now pervasive in homes (Reference 2). Feeding these technologies are high-quality sound and video content, both residing locally on optical discs and transported to the living room through various wired and wireless broadcast channels (references 3 and 4). As such, the movie-theater industry has dusted off the other competitive technology it first tried out back in the 1950s: 3-D. Digital cinema is by itself insufficient to ensure continued moviegoer loyalty, in part because the benefits versus the silver-halide predecessor are mostly relevant to the theaters and studios: ease of distribution and accounting, along with no media degradation through repeated showings. But digital technology does enable more robust 3-D projection and viewing implementations than the anaglyph—that is, bicolor-lens glasses—approach allowed for. The industry introduced that approach more than a half-century ago and largely discarded it soon afterward (Reference 5).
Unfortunately for theater owners, 3-D is coming to living rooms faster than the cinema industry probably had hoped (see sidebar “Theater transformations”). Although last year's NTSC-to-ATSC (Advanced Television Systems Committee) conversion in the United States encouraged widespread consumer transitions from standard-definition CRTs (cathode-ray tubes) to high-definition LCDs (liquid-crystal displays) and plasma displays, the consumer-electronics industry's attempts to encourage an evolution in both hardware and content libraries from DVD (digital versatile disc) to Blu-ray disc were less successful (Reference 6).
More generally, the last several years' worth of economic downturn has encouraged potential purchasers to keep their wallets in their pockets, to the widespread detriment of the consumer-electronics industry, which now views 3-D as the spark that might reignite consumers' interest and acquisition habits. Ultimately, studios seem loyal only to their investors; a recent dispute over the timing of Walt Disney Co's plans to bring Tim Burton's Alice in Wonderland to DVD suggests that Disney and its peers are fundamentally motivated by profit targets, not continued partnership with any link in the historical content-distribution chain (Reference 7).


Theory, implementation

Although various companies, research laboratories, and academic institutions are investigating 3-D holographic setups like the one that the original Star Wars memorably showcased, 2-D displays will constitute the dominant means of viewing 3-D content for years to come. As such, how do you trick a viewer's eyes and ears into extracting a 3-D presumption from a flat-screen presentation? Various 3-D approaches all start from the same premise: Present perspective-corrected views of each frame of a scene to the viewer's left and right eyes, either simultaneously or sequentially and at a sufficiently high rate that the cadence is imperceptible, and then rely on the brain to stitch them together as in real life.


The devil is in the details, however. The anaglyph approach that dates from the 1950s typically relied on red and blue filters, although more modern variants use different patterns (see sidebar “Glasses alternatives”). These filters allow the brain to differentiate between right- and left-eye variants of an image within the same frame. Nearly a decade ago, EDN published an example of anaglyph 3-D and bundled paper glasses with the issue (Figure 1 and Reference 8).
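As a rough sketch of the idea (the function and the red/cyan channel assignment below are my own illustration, not something from the article), an anaglyph frame can be composed by taking the red channel from the left-eye image and the green and blue channels from the right-eye image:

```python
def make_anaglyph(left, right):
    """Compose a red/cyan anaglyph from left- and right-eye frames.

    Each frame is a list of rows; each row is a list of (r, g, b)
    pixel tuples.  The red filter over the left eye passes the red
    channel of the left image; the cyan filter over the right eye
    passes the green and blue channels of the right image.
    """
    if len(left) != len(right) or len(left[0]) != len(right[0]):
        raise ValueError("left and right frames must match in size")
    return [
        [(lp[0], rp[1], rp[2]) for lp, rp in zip(lrow, rrow)]
        for lrow, rrow in zip(left, right)
    ]
```

Viewed through matching red/cyan glasses, each eye then recovers an attenuated, color-distorted version of its intended image, which is exactly the luminance and color-gamut penalty the article goes on to describe.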


Anaglyph 3-D is relatively inexpensive to implement, but it suffers from several notable shortcomings that resulted in significant consumer backlash when the industry introduced it. For one thing, the color filters substantially attenuate the amount of light reaching viewers' eyes and degrade the color gamut of the image each eye receives. Second, the technology suffers from image “bleed-through”—that is, the partial presentation of one eye's intended image to the other, and vice versa. This bleed-through distorts the 3-D presentation. Third, the glasses' dimensions are often incompatible with viewers' head sizes, eye-to-eye spacing, and distance from screen and viewing angles to the screen. These disparities can result in headaches, nausea, dizziness, and other issues.



The industry has since developed several other glasses formats, along with the no-glasses autostereoscopic display (Figure 2). One glasses format employs polarization, a variant of the anaglyph approach, with the same luminance-attenuation issue but without anaglyph's chroma-shift problems. One eye's perspective-corrected image leverages light that polarizes differently from that of the image the other eye receives. Matching polarization in the glasses' lenses passively routes the correct image to the correct eye. The perceived success of traditional polarization, as measured by the absence of image “leakage,” for example, depended highly on how straight and still viewers held their heads through the presentation. The more modern circular-polarization technique alleviates most of the orientation and immobility requirements. The other now-common glasses approach leverages an LCD shutter in each lens. This technology sequentially projects left- and right-perspective images timed to match the cadence of sequential left and right active passage and blockage of light transmission to each eye.


Passive-polarization systems in theaters can take the form of either a single projector with a precisely paced spinning polarizer disc in front of the projection lens or a sequentially timed dual-projector arrangement. Both cases require the installation of a special “silvered” screen to preserve the projected light's polarization characteristics. Conversely, with active-LCD-shutter glasses, theaters can employ a conventional screen and single-projector setup. This approach commonly uses an infrared beam that comes from the projector, bounces off the screen, and floods the audience to control the switching rate of viewers' glasses. However, active LCD glasses are substantially more expensive and bulky than their passive-polarizer counterparts, and theaters must regularly recharge their embedded batteries. All these factors necessitate their collection and cleaning before distributing them for reuse, and some consumers voice concerns about sanitation.


Display contenders

Migrate 3-D from the movie theater to the home theater, and your implementation options radically expand. Your customers could, of course, mimic a theater configuration using a single-projector or multiprojector arrangement employing DLP (digital-light projection), LCD, or LCOS (liquid-crystal-on-silicon) technology, but such setups are largely restricted to videophiles. Because modern LCD and plasma direct-view televisions switch and refresh fast enough for stutter-free playback, you can alternatively employ active-shutter glasses, timed through an infrared, RF (radio-frequency), or wired connection to the display, to synchronize with a sequentially displayed right- and left-eye-intended version of each frame. Alternatively, some LCDs leverage passive polarization, in which a polarizer filter lies directly atop each consecutive display line. This filter tailors that line's image for one of the viewer's eyes. The chief downside of the passive approach is that it halves the effective vertical resolution that each eye perceives. However, you can usually disable the polarization effect for use when viewing 2-D content or 3-D content that the display has dynamically converted to 2-D for viewing without using glasses.
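The line-by-line passive scheme can be sketched as a simple row interleave (a toy illustration of mine, with frames represented as lists of pixel rows), which makes the halved per-eye vertical resolution easy to see:

```python
def interleave_lines(left, right):
    """Build the frame for a passive line-polarized LCD.

    Even display rows carry the left-eye image and odd rows the
    right-eye image; the per-line polarizer filter then routes each
    row to the matching lens.  Each eye therefore sees only half
    the panel's vertical resolution.
    """
    return [lrow if i % 2 == 0 else rrow
            for i, (lrow, rrow) in enumerate(zip(left, right))]
```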

Any glasses-based 3-D technology, however, has several notable shortcomings. Chief among them is the sizing concern; any mismatch between the glasses' and viewer's facial dimensions will lead to discomfort or worse. There's also the potential for breakage or misplacement and, therefore, the need for replacement of glasses, a concern to consumers but perhaps a tempting opportunity for supplier profit, particularly with the comparatively expensive active-shutter approach. Also, LCD glasses' need for periodic recharging can lead to frustration when potential customers must reschedule movie night because of the glasses' drained batteries or failure in the middle of a movie. Finally, there's the compatibility worry, a likely scenario in the early days of any new technology. Isn't it reasonable to assume that consumers will be reluctant to invest in glasses that they can't use at the homes of friends, family members, and neighbors or a 3-D technology that an upstart alternative might render obsolete?

Taking a no-glasses tack at solving the problem, a number of manufacturers have developed autostereoscopic displays. These displays incorporate lenticular lenses, parallax barriers, or other mechanisms to create depth perception from a flat projection surface. To succeed to a reasonable degree, however, autostereoscopy requires that the viewer be rigidly positioned in a “sweet spot” throughout the presentation. Even under ideal circumstances, autostereoscopy doesn't create a compelling end result. I've auditioned many autostereoscopic displays over the years, and I've never walked away even remotely impressed. Fortunately, users can switch autostereoscopic displays into 2-D mode to view conventional content. Like with other specialty-display types, such as large-screen OLEDs (organic light-emitting diodes), it's nonetheless difficult to envision that autostereoscopy can achieve sufficient early-adopter sales to appreciably reduce costs and prices for the masses.

Distribution challenges

How can you transport discrete right- and left-eye versions of each video frame's information through a wired or wireless “pipe” that was originally bandwidth-tuned for single 2-D frame transport? In short, you can't. This pragmatic reality means that trade-offs will be necessary to accomplish a 3-D presentation. Two bandwidth-slimming possibilities are to lower the per-pixel color depth or the playback frame rate. Take HDMI (high-definition multimedia interface), for example (Reference 9). Now-pervasive HDMI Version 1.3 has 340 MHz of bandwidth—more than double the single-link bandwidth of its HDMI Version 1.2 predecessor. This bandwidth speedup translates to 10.2-Gbps TMDS (transition-minimized-differential-signaling) bandwidth, or 8.16-Gbps video bandwidth, which is more meaningful to the application. This bit rate is adequate for passing a 3-D variant of a full-frame 720p30, 1080i60, or even 1080p60 video presentation at 24-bit-per-pixel color (Table 1). Higher-color-depth, higher-frame-rate, or higher-resolution 3-D video clips, however, could find HDMI 1.3 or HDMI Version 1.4, which has the same speed as its predecessor, lacking from a bandwidth standpoint. Such a demanding data payload might come, for example, from a game running on a computer or an arcade console (Reference 10).
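The quoted figures follow from HDMI's three TMDS data channels carrying 10 bits each per pixel clock; here is a quick back-of-the-envelope check (my own arithmetic, ignoring blanking intervals for simplicity):

```python
# Sanity-check the HDMI 1.3 bandwidth figures quoted above.
MHZ, GBPS = 1e6, 1e9

tmds_clock = 340 * MHZ       # HDMI 1.3 maximum TMDS clock
channels = 3                 # three TMDS data channels
bits_per_channel = 10        # 10 bits on the wire per 8 payload bits

tmds_bandwidth = tmds_clock * channels * bits_per_channel  # raw link rate
video_bandwidth = tmds_bandwidth * 8 / 10                  # payload after coding

print(tmds_bandwidth / GBPS)     # 10.2 Gbps
print(video_bandwidth / GBPS)    # 8.16 Gbps

# A stereo (two-eye) 1080p60 stream at 24 bits/pixel, active pixels only:
stereo_1080p60 = 2 * 1920 * 1080 * 60 * 24
print(stereo_1080p60 / GBPS)     # ~5.97 Gbps, within the 8.16-Gbps budget
```

Real links must also carry blanking intervals and audio, so the headroom is smaller than this active-pixel estimate suggests, but the conclusion matches the article: full-frame stereo 1080p60 at 24 bits per pixel fits, while higher color depths, frame rates, or resolutions do not.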



Lower-bandwidth connections, such as broadband-Internet WAN (wide-area network), wired and wireless LAN (local-area network), and ATSC-broadcast beacons, have even more challenged 3-D capabilities, despite the fact that low-bit-rate, lossy compression algorithms usually find use as their video codecs (Reference 11). As such, per-frame resolution reductions are often necessary so that two frames' worth of information, corresponding to the right- and left-eye data, can squeeze into the transport space that one frame's worth of data formerly used. The resolution-reduction techniques take different approaches to optimizing the trade-off between resultant image quality and processing complexity (Figure 3).



The side-by-side scheme requires that the video processor in the display or projector subsequently horizontally expand each eye's image to fill the full frame size, whereas the “over-and-under,” “above-and-below,” or “top-and-bottom” approach requires subsequent vertical scaling. Adding support for the over-and-under approach is the impetus for the HDMI Version 1.4a specification, which is now publicly available. Alternatively, the technology can scatter the right and left eye's data across the source frame in a line-by-line or checkerboard pattern. Keep in mind that further alteration may be necessary to tailor the material to the target display technology once the video data traverses the interconnection to the destination playback device.
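The two packing schemes can be sketched in a few lines (toy functions of my own, with frames as lists of pixel rows; a real encoder would filter the image rather than simply drop samples):

```python
def pack_side_by_side(left, right):
    """Pack two full frames into one by halving horizontal resolution.

    Every other column of each eye's image is dropped, and the two
    half-width images sit side by side in a single full-size frame.
    The display must later stretch each half back to full width.
    """
    return [lrow[::2] + rrow[::2] for lrow, rrow in zip(left, right)]


def pack_top_and_bottom(left, right):
    """Pack two full frames into one by halving vertical resolution.

    Every other row of each eye's image is dropped; the left-eye
    half sits above the right-eye half, and the display must later
    scale each half back to full height.
    """
    return left[::2] + right[::2]
```

Either way, the packed frame occupies exactly one frame's worth of transport bandwidth, which is the whole point of the trade-off described above.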



Rather than rapidly presenting the right and left eye's information in frame sequence and synchronizing it to active-LCD-shutter glasses, as projectors, plasma displays, and some LCD TVs do, other LCD TVs “stripe” the two eyes' data within a single display frame. Even if full-frame 3-D playback is possible over the transmission channel, the destination device may be unable to accept it without a firmware upgrade. The 1920×2205-pixel frame size of the full over-and-under 3-D implementation, for example, is incompatible with the EDID (extended-display-identification data) in almost all currently installed displays, as well as that in intermediary A/V (audio/video) receivers and HDMI switchboxes (Reference 12).



Capture and storage

The same data-payload constraints that may hamper transmission channels in 2-D-to-3-D conversion also potentially have an impact on the storage devices that archive the video information. Storing the left and right eyes' per-frame information in full frame would require more than double the capacity of 2-D technology. The Blu-ray Disc Association last December announced its support for the 3-D video format. Ironically, 3-D may finally provide sufficient justification for Blu-ray's increased capacity over its DVD predecessor because a multilayer DVD in combination with an advanced video codec, such as H.264 or VC-1, provides sufficient capacity to hold a full-length Hollywood feature film in a 2-D, high-definition format (Reference 13).



Speaking of film, how do moviemakers create 3-D presentations? With full computer-graphics animation sequences, the process is relatively straightforward, involving rendering distinct versions of each frame's geometry data from the right and left eyes' perspectives (see sidebar “2-D conversions”). The algorithms can even sometimes run in real time on a modern graphics processor, as the glasses-plus-board retail kits employing AMD/ATI and Nvidia chips demonstrate (Figure 4).



More traditional video-image capture requires a dual-lens setup, often with dual sensors and dual storage devices. Panasonic showed such a videocamera for professional videographers at January's CES. The $21,000 AG-3DA1, which the company had previously unveiled at the April 2009 NAB (National Association of Broadcasters) show, will become available for sale this fall. Nvidia's press briefing at the same CES showcased Fujifilm's more moderately priced FinePix Real 3D W1 camera (Figure 5). It's probably no surprise that large consumer-electronics companies, such as dominant Blu-ray backer Sony, have plans for 3-D-supportive still cameras and videocameras, and CES demonstrations from even entry-level imaging manufacturers suggest that the technology will rapidly and pervasively make its way into the marketplace. Some degree of initial incompatibility between capture sources, playback destinations, and intermediary devices is inevitable until industry and de facto standards take hold, but the long-term future for 3-D looks realistic.