Touch The Screen On Both Sides Now

Multitouch
technology is receiving a lot of press these days. With Apple’s iPhone
blazing the trail, devices that let users go from simply using one
finger to press a button or launch a program to using two or more
fingers to drag, rotate, and resize on-screen objects may soon send
user interfaces that utilize a mouse or other pointing device the way
of the dodo.
[Caption: An early design drawing of LucidTouch shows how a user will be able to utilize multitouch and interact with a program from behind the screen.]

But
a problem arises when the consumer’s desire for ever smaller devices
comes into direct conflict with his desire for a cool multitouch screen
interface. The smaller the screen, the more of it your fingers cover up
during use. And because your fingers are finger-sized—not pixel-sized—it can be difficult to touch the screen precisely where a program is expecting input. Researchers
at Microsoft and MERL (Mitsubishi Electric Research Lab) have developed
a possible solution: LucidTouch, a two-sided touchscreen device that
uses the back of the screen for multitouch input while
overlaying a semitransparent image of a user’s fingers so that he can
see where he’s touching the screen as he interacts with a program. “It’s
often said that direct-touch is more ‘natural’ than indirect input
devices,” explains MERL researcher Daniel Wigdor. “This may be true,
but this naturalness comes at the high cost of occlusion and lack of
precision. Pseudo-transparency is a lightweight, intuitive
way of recouping some of that cost while still maintaining the
affordances of a direct-touch device.” The team MacGyvered a
prototype of the system by gluing a multitouch pad to the back of a
commercial single-input touchscreen. Then, in order to capture an image
of a user’s hands, they attached a Web cam to the back of the device
using a long boom. Finally, software running on the computer to which
the device was attached flipped the image and dropped out the
background, creating the pseudo-transparent effect.
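
For readers who want a feel for the trick, here is a minimal sketch of that image pipeline in Python with OpenCV. Everything specific in it is our assumption, not a detail of the LucidTouch prototype: the camera index, the placeholder draw_ui() function, and the 40% overlay opacity are all illustrative.

```python
# Sketch of pseudo-transparency: mirror the rear camera's view of the
# user's hands, drop the static background, and blend the hand
# silhouette semi-transparently over the interface.
import cv2
import numpy as np

def draw_ui(size=(480, 640)):
    """Placeholder UI: a dark canvas with one 'button' (illustrative only)."""
    ui = np.full((*size, 3), 40, dtype=np.uint8)
    cv2.rectangle(ui, (250, 200), (390, 280), (0, 160, 255), -1)
    return ui

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
cap = cv2.VideoCapture(0)           # stand-in for the boom-mounted Web cam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)      # flip so touches line up with the UI
    frame = cv2.resize(frame, (640, 480))
    mask = subtractor.apply(frame)  # drop out the static background
    hands = cv2.bitwise_and(frame, frame, mask=mask)
    # Blend the hand silhouette at roughly 40% opacity over the interface
    composite = cv2.addWeighted(draw_ui(), 1.0, hands, 0.4, 0)
    cv2.imshow("pseudo-transparency sketch", composite)
    if cv2.waitKey(1) == 27:        # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```
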
In tests, users were able to type from behind on a split keyboard, browse a map to find a specific location, and drag and drop objects. According to
Microsoft researcher Patrick Baudisch, the next step in LucidTouch’s
development will be to replace the external Web cam with an integrated
video-capture device that can sense the location of a user’s fingers
and create the pseudo-transparent effect. Some possibilities for
achieving this include a capacitive array, LED sensors, or stereo
cameras embedded in the device’s body.
Here Comes The Spider-Man . . . Suit

In last month’s “Under Development,” we joked that a new carbon
nanocomposite material might be used to create X-ray glasses. It does
seem that carbon nanotubes are being touted as the miracle material of
the future, promising advances in chip technology, super-strong
building materials, and clothing that can not only serve as body armor
but also help humans climb walls, a la Spider-Man.
[Caption: Using the gecko as inspiration, scientists hope to design boots and gloves that could allow humans to stick to and climb vertical surfaces.]

This
time, the comic book reference is no joke. Nicola Pugno, a structural
engineering professor at the Polytechnic University in Turin, Italy,
has published a paper that outlines how carbon nanotubes could be used
to create super-adhesive boots and gloves capable of supporting a
human’s weight. The inspiration for Pugno’s research came
from the way that geckos and spiders are able to scamper up and down
vertical surfaces, even very smooth surfaces such as glass, without
slipping or sliding down. Geckos stick because they have thousands of
tiny fibers on their feet that create adhesion through capillary action
(due to a thin layer of water between their feet and the surface) and
van der Waals forces (weak attractions between molecules in very close contact). A spider climbing
a web relies on a nano-interlocking of fibers. In order to
duplicate these adhesive effects on a human scale, Pugno theorizes that
a material could be made from millions of carbon nanotubes woven into
1cm-thick threads. At the end of each thread, the nanotubes would fan
out to provide millions of points of contact between the material and
the surface, creating enough adhesive force to support the weight of a
150-pound man.
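
As a back-of-the-envelope check (ours, not Pugno's), the load such a suit must carry is easy to pin down; the adhesive force per contact point, f, is left symbolic because the article doesn't specify it:

```latex
% Weight of a 150-pound (about 68 kg) climber:
\[
  W = mg \approx 68\,\mathrm{kg} \times 9.8\,\mathrm{m/s^2}
    \approx 6.7 \times 10^{2}\,\mathrm{N}.
\]
% With n contact points, each contributing adhesive force f,
% the material holds as long as
\[
  n f \ge W \quad\Longrightarrow\quad n \ge \frac{W}{f},
\]
% so the weaker each individual contact, the more points of contact
% the fanned-out nanotube ends must supply.
```
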
Although the suit is currently only a theory, Pugno hopes that a prototype could be developed from his calculations
within 10 years.

Photographic Mix & Match

Even
with digital photography, we still take plenty of snapshots that “would
have been perfect, if only . . .” If only that building wasn’t in the
way of the view. If only that obnoxious tourist wasn’t posing next to
that statue. If only there wasn’t scaffolding covering the face of that
beautiful cathedral. What if there was a way to convert those
“if-onlies” into the perfect shots you intended?

[Caption: Using a “gist descriptor” algorithm, the roof of a building in the original photo is removed and replaced with sailboats from an entirely unrelated image by the Scene Completion software.]

Two
researchers at Carnegie Mellon have developed a program that will help
do just that. The software, called Scene Completion, uses a novel
approach to image retouching. Unlike systems that attempt to reconstruct missing or removed image data by extrapolating from other parts of the same photo, Scene Completion uses other photos (sometimes completely unrelated to the subject matter of the photo being retouched) to find an appropriate and believable patch. In
order to accomplish this, the software sorts through millions of photos
(in this case, using over 2 million Flickr photos) as its data set and
applies a “gist descriptor” algorithm (developed at MIT) to find photos
that share the same general properties as the photo needing work:
shapes, colors, geographical features, and textures. From that subset,
the software looks for specific pieces to fill in the needed data,
trying to match the colors and border well enough that with blending of
the edges, the photos look almost as good as the real thing.
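
The retrieval step is easy to caricature in a few lines of Python. This is a toy sketch only: gist_like_descriptor below is a crude stand-in we invented (per-cell brightness and edge energy), far simpler than the MIT gist descriptor the researchers actually apply, and a real system would index millions of images rather than a list in memory.

```python
# Toy version of Scene Completion's first stage: rank a photo
# collection by similarity of a coarse whole-scene descriptor.
import numpy as np

def gist_like_descriptor(img: np.ndarray, grid: int = 4) -> np.ndarray:
    """Crude scene descriptor: per-cell mean intensity and gradient energy."""
    h, w = img.shape
    gy, gx = np.gradient(img.astype(float))
    energy = np.hypot(gx, gy)
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = (slice(i * h // grid, (i + 1) * h // grid),
                    slice(j * w // grid, (j + 1) * w // grid))
            feats.append(img[cell].mean())
            feats.append(energy[cell].mean())
    v = np.asarray(feats)
    return v / (np.linalg.norm(v) + 1e-9)  # normalize for fair comparison

def rank_candidates(query: np.ndarray, collection: list) -> list:
    """Return indices into collection, most similar scene first."""
    q = gist_like_descriptor(query)
    dists = [np.linalg.norm(q - gist_like_descriptor(c)) for c in collection]
    return list(np.argsort(dists))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    photos = [rng.random((64, 64)) for _ in range(10)]
    # The best-ranked scenes would then be searched for a patch whose
    # colors and borders blend into the hole being filled.
    print(rank_candidates(photos[0], photos)[:3])
```
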
The program does have limitations. Some photos are more difficult to match
than others, explains James Hays, the Carnegie Mellon computer science
graduate student who developed the software. “Scenes that are very rare
(an unusual focal length, orientation, camera height, subject matter,
coloring, etc.) have fewer good scene matches. It’s harder to fill
holes in those rare scenes since it’s harder to find similar scenes to
take content from.” The other major issue is the large number
of photos needed to create an accurate subset from which to cull the
most realistic photo patch, and the potential copyright violations if
public photo repositories are used. Hays believes that using only the 75 million Flickr photos licensed under Creative Commons to allow modification provides a large enough sample size, but the question
of how many of those images are of high enough quality to be useful
remains. There are currently no plans to commercialize the
Scene Completion system, though the team would like to release it to
the public once some speed issues have been addressed, perhaps as an
online version. Currently the database size (half a terabyte of images) prohibits distribution in any other manner.

Memory Goes 3D

The
next major advance in memory chips could come by working in the third
dimension. Stuart Parkin and his colleagues in the IBM Almaden Research
Center’s (www.almaden.ibm.com)
SpinAps group are working to develop a new type of memory that could
store significantly more data in the same physical space (as well as
access that data at much faster speeds) than current magnetic and
solid-state memory devices. The potentially revolutionary
technology, dubbed “racetrack memory,” utilizes millions of nanowire
loops positioned vertically around the edges of a silicon chip, each of
which could store between 10 and 100 times more data than today’s flash
memory. Electric current moves tiny magnets up and down the wires at speeds of over 100 meters per second, allowing read/write times as short as a nanosecond. This would overcome a major drawback of flash memory: its slow write speed.
The memory is in the early stages of development: Parkin has yet to build a
prototype, although he has shown that the basic elements of this new
type of memory are possible. Parkin’s team still needs to reduce the
amount of current necessary to move the information along the nanowire
and research the possible interference between the closely spaced
nanowires to determine how densely they can be packed. If Parkin is
able to solve these problems, it’s possible that racetrack memory could
make its move on the market in three to five years.