What would/does Paul Virilio say about this?

Giant Poster shows Drone Pilots who they’re bombing

also, this kinda sounds like it could be a TOMS shoes story: happy kids one day, dangerous people taking the property when the foreigners leave. Not saying that’s happening, but it could.

dusdin:

Today I’m starting a new series of inexpensive monthly print sales.  
Every month I will be offering new limited editions of signed and numbered archival pigment prints.
One of these prints will be made from whichever image posted to tumblr the previous month is most popular, based on the number of notes it receives.  The others will be taken from my archive. 
For the month of April 2014, I have:
A portrait of Sharon Van Etten, 2014.
A black and white figure, 2011.
Praktika in Moscow, 2008.
I’ll only be able to make these prints at such a low price if I receive at least 10 orders per image, so please, reblog and encourage your friends to come and check out my print sale!

dusdin:

HEY EVERYONE, I’m relaunching my website today with a lot more content, including a print sale that I’ll be announcing in a couple hours.
Go have a look!

"Jaden Smith: Words of Wisdom"

"Movie Title [Talk]"

Familiar work. Some of these really work. Others desperately don’t. It’s hard to tell when you’re writing it out what will work, right?

new-aesthetic:

World’s First Real-Time Google Glass Facial Recognition App Demo - November 26 2013 (by FacialNetwork)

Inspirational music because “privacy.”

"For over 30% of New Yorkers, over half of their paycheck goes towards rent, according to the Furman Center."
Crazy stat.
(from “The Truly Affordable New York Apartment" video essay on NYTimes.com"

"For over 30% of New Yorkers, over half of their paycheck goes towards rent, according to the Furman Center."

Crazy stat.

(from “The Truly Affordable New York Apartment” video essay on NYTimes.com)

Here’s a long-ish essay I recently wrote about future interfaces. I feel…eh, ok about it. I don’t think I’m totally hitting the nail on the head, but I think I’m finding good examples and definitely floating around the right thought. Any suggestions or feedback? 

new-attitude:

It makes me excited/uncomfortable that this thesis presentation from ITP is probably identical to what I would have produced had I gone, although I probably would have approached the question with a slightly stronger liberal arts bent and slightly weaker technical implementation.

In his presentation, Robbie goes through a series of new interaction methods, including using the Leap Motion to mock up the interaction from the Iron Man movies (skipping Minority Report’s similar interface), building a sort of Google Glass-like HUD prototype with a depth sensor, and creating a Leap-controlled holographic reproduction of a globe, which kinda reminds me of holograms I saw at the Oregon Museum of Science and Industry as a child. The sheer number of experiments he builds and conducts is impressive, and you really get a sense that he’s worked to become very familiar with the “future interfaces” space.

Answering Tom Igoe’s question, “which of these do you find most comfortable as part of your everyday life?”, Robbie responds that the Google Glass interface seemed best to him. (It should be noted that his addition of a “take a picture” hand movement to make his depth-camera-powered HUD record an image is a great idea.)
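
(To make that idea concrete, here’s a minimal, hypothetical sketch of gesture-triggered capture: poll a camera with OpenCV and save a frame whenever a detector fires. The detect_picture_gesture() function is a placeholder for whatever hand tracking the HUD actually uses—this is not Robbie’s implementation, just an illustration of the pattern.)

    import cv2
    import time

    def detect_picture_gesture(frame):
        """Placeholder: return True when the 'framing' hand pose is detected."""
        return False  # swap in real hand-tracking / depth-camera logic here

    cap = cv2.VideoCapture(0)  # stand-in for the HUD's camera feed
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if detect_picture_gesture(frame):
                # save a timestamped still whenever the gesture fires
                cv2.imwrite("capture_%d.png" % int(time.time()), frame)
    finally:
        cap.release()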

I got a sense from Robbie that he really enjoyed this work, put a lot into it and got a lot out of it…but that he was a little disappointed with the results, a little less optimistic about the impact of these new interaction methods. Or maybe I’m reading into it—in the past year, I’ve become a little less excited by these novel interaction tools myself.

The Leap, Oculus Rift, Kinect (and other PrimeSense/Apple devices), Google Glass, Myo…it’s still not entirely clear how these products will get good enough to outshine other standard interfaces*, including input devices like mice, trackpads, and keyboards, and output methods like flat displays, phone vibrations, and dual i/o things like touchscreens and voice-activated services/products like Google Now and Siri. It’s not clear that gestural interfaces will ever enable the degree of precision offered by more typical inputs, not just because the “technology isn’t there yet,” but because tools like mice allow us to anchor our arms to the desk, decouple commands (clicks) from movement (arm motion), and provide clear tactile feedback. (Edit: I should note, I understand the Oculus will probably hugely impact gaming. Some think it will change film-making, but I think that idea will go the way of 3D television, at least in the short term. In this essay, I’m thinking about the use of these interaction tools outside of entertainment.)

When these new interaction tools do outshine standard interfaces, it’s when companies like Oblong Industries make products for the government, where the level of education required to use them reduces the pressure to innovate on interaction quality. Or their products are made for the business sector—“like magic,” but for particular uses in a particular setting at a high price. Also, these input devices/methods seem best suited to collaboratively manipulating visualizations of large data sets rather than to helping me make my coffee in the morning or contact a friend.

But going back to our initial devices—if these are the first steps, if the Rift is the low-fi version of its successor, if the Leap is the prototype of a reliable movement sensor, if the Kinect’s framerate and pixel density are 1/10th of what they will be, where will these interface types take us? In one way, the Google Glass seems the most obvious success because it performs tasks that we already know, value, and do frequently—checking the weather, taking video calls, text messaging, taking photos. In fact, in the Glass demo video, the Google Glass basically operates like a smartphone on your head. But this is also its downfall—its understood functions already belong to the less wonky smartphone; its “to-be-discovered” features have yet to reveal themselves. We already know the tasks it performs are useful, and we know the mechanics of those tasks can be satisfied by this interface, but the potential to be more than a “smartphone on your head” is murky. It’s not clear how the Glass or these other products will help us complete important, frequent activities better than the current options, or work themselves into our lives some other way. This isn’t to say that they won’t—personal computers were assumed impossible and unnecessary, smartphones were thought excessive…but still, there is no screaming need making it obvious how these products will become common.

(Tangential thought—these products, which provide a novel, specific input method, are almost the opposite of products coming from the Internet of Things movement, where objects provide output without forcing any input. This isn’t to say they’re direct complements—most IoT products, including the Nest, the Hue, and quantified-self products like the Nike Fuel and Fitbit, run “in the background.” They pride themselves on requiring the smallest amount of input possible. That said, their pairing at times seems the most futuristic. Imagine this Kinect-powered lighting setup paired with Hue lights!)
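
(As a thought experiment of what that pairing could look like in code, here’s a minimal, hypothetical sketch that maps a tracked hand height from any depth sensor to the brightness of a Hue bulb through the bridge’s local REST API. The bridge address, username, light ID, and the read_hand_height() helper are all placeholders, not a real integration.)

    import requests

    BRIDGE_IP = "192.168.1.2"   # placeholder: your Hue bridge's LAN address
    USERNAME = "hue-api-user"   # placeholder: an API username created on the bridge
    LIGHT_ID = 1                # placeholder: which bulb to control

    def set_brightness(hand_height):
        """Map a normalized hand height (0.0-1.0) to Hue brightness (0-254)."""
        bri = max(0, min(254, int(hand_height * 254)))
        url = "http://%s/api/%s/lights/%d/state" % (BRIDGE_IP, USERNAME, LIGHT_ID)
        requests.put(url, json={"on": bri > 0, "bri": bri})

    # called from whatever loop your depth sensor / hand tracker provides, e.g.:
    # set_brightness(read_hand_height())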

One product that I think could make a real impact is Meta’s “Space Glasses.” I still have yet to see a convincing proof of concept for their device, but a true augmented reality interface could offer passive data presentation while taking advantage of a gesture vocabulary. The benefits seem real to me, even if their current teaser video avoids realistic use cases. While the device lacks the haptic feedback required to successfully simulate the act of sculpting or designing a 3D product with your hands, it could extend one’s work space, offer passive data about current situations, and situate information using 3D modeling—and these features could all be added on top of Google Glass’s capabilities. Granted, the glasses from Meta are not being built as a mobile device and require an additional computer to power them, but the lighter/smaller/faster/cheaper of Moore’s Law could help bridge this gap, even if Moore’s Law peters out soon.

Smart watches are another product type on the horizon, but I can’t see them having significant impact outside of a pre-existing technology infrastructure. Apple may be the one to make smart watches a desirable object by incorporating them into an existing Apple-powered product system, but I can’t see this development as more than moving the smartphone to the user’s wrist rather than the head (like Glass). I bet in a couple years I’ll regret saying this, though, as I haven’t spent as much time thinking about it. (Edit: Tog writes about the future of iWatches. Great read.)

All of this said, Chris Dixon recently reminded me of Amara’s Law—that “we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” I want to take a stab at why that is the case with these technologies.

The main reason it’s difficult to grasp the potential of recently-introduced input technologies like the Leap and the Kinect is that we are primarily viewing them outside of their future, integrated contexts, which may include multiple products and multiple instances of these technologies. (Edit: I’m not so sure anymore. Maybe, like with the 3D TV, it’s more to do with the ecosystem around these tools…back to “on the fence” about this.)

First, depth sensing and gesture recognition are features, not products. As opposed to products themselves (software suites, for example), it’s not entirely clear what to do with them outside their current confines. The majority of Kinect end-users are buying them as part of the Xbox package—only the experimenters and tinkerers are buying them standalone. We’re beginning to see further integration, as the Leap is already being integrated into HP computers, but the add-on value is still vague, especially given that existing models of computing are designed around more standard input methods. But these two are still the most obvious cases. Can you imagine the Oculus Rift as a productivity tool? What will the Myo control in the future? How will Apple’s iWatch work with iBeacons and IoT products to augment your everyday experience? What will it all look like together?

There are a few examples of what that might look like. “Tech of the future” videos tend to offer a reflection of how companies imagine themselves in the future, rather than how the future will really appear. That said, they offer a view of integrated technology environments and explore ways in which the technology works together to accomplish users’ goals. For a slightly different approach, the dark, fictional video “Sight” provides a less sterile version of these presentations.

Another example: Oliver Kreylos has built software that uses the Kinect, Razer Hydra controllers (similar to the Wii controllers), and the Oculus Rift. The demo video presents the technology in a way similar to early Oblong and SixthSense demonstrations, but, like Oblong’s current enterprise system Mezzanine, Kreylos uses controllers that offer a larger number of input methods (“buttons”) without reducing precision the way hand gestures do. Kreylos’s example is still primarily a visual data manipulation tool, but it’s a “personal” enough solution that you can actually imagine someone using it at their desk.

Steven Sinofsky at learningbyshipping.com thinks that 2014 will be the “culmination of the past 15 years of development of the consumer internet.” He’s focusing, however, on more standard devices (including “phablets”), storage methods (the cloud), and consumer behavior. I think the novel interaction technologies I’ve discussed still have three or four years before we begin to see them in full bloom.

I’d like to spend more time in this space—the “nearish future of the recently possible.” What I suggested as an aside earlier, the movement toward an Internet of Things, will, I think, play a major role in determining the possibilities of new interaction technology. There will be advances in both the consumer and enterprise worlds as we find ways that novel interactions help us better understand and manipulate information, add intuitive methods of navigating data, and simplify and extend actions. Make things easier.

* That said, the nuance of gesture has already enabled people to do more than they can with a mouse and keyboard in specific situations. Kinectar is a great example.

new-aesthetic:

Twitter / alexlundry: “Greatest paragraph in an academic paper ever?”

Jonze had help in finding the contours of this slight future, including conversations with designers from New York-based studio Sagmeister & Walsh and an early meeting with Elizabeth Diller and Ricardo Scofidio, principals at architecture firm DS+R. -

Little things like this excite me a bunch. Imagine being in that meeting. Good stuff.

(via ‘Why Her will dominate UI design more than Minority Report’ on Wired.com)

Touch is not intuitive, it is not natural. -

Shmuel Eden, senior vice president and general manager of Intel’s perceptual computing.

The article continues: “He said the company wanted to make it possible for people to communicate with computers more like we do with one another — using eyesight and speech.”

This quote sounds weird to me. I don’t think it’s simply the current paradigm that makes touch intuitive or natural. In communication with other people, touch is not necessary. But when we use tools, touch is entirely intuitive and natural. Maybe Eden doesn’t want to see computers as tools? Maybe he’s envisioning a future where we communicate with computers rather than use them as tools. That’s a big thought.

(via)