A conversation I had recently got me to thinking about this and my brain wasn’t going to let me stop until I did the math for myself. Exactly how much money are you wasting on bottled water?
First let’s talk about bottled water…
If you shop at a warehouse club, you can get 40 17oz bottles of water for $3. That’s less than a dime per bottle, $0.075 to be exact. Let’s assume that your household drinks 3 bottles per day on average. That’s 145.43 gallons per year costing a total of $82.12. And you’re putting 1,095 plastic bottles into your waste stream; hopefully they’re being recycled.
And now another approach…
First there’s the water itself. The price of tap water varies by city; here in Tempe I pay $0.18 per 100 gallons, but let’s say $0.20 to keep the math easy. 145.43 gallons of tap water costs $0.29.
But you may not be comfortable drinking what comes out of your tap as is; I’m not. So you need some sort of filtration system. The pitcher style is probably the most popular, so we’ll focus on that. The pitcher itself could set you back as little as $10 or more than $30 depending on brand and capacity. My pitcher is 5 years old and still going strong, but let’s assume you need to buy a new $20 pitcher every five years. That’s $4 per year.
Generic replacement filters are $3.67 each and need to be replaced every 40 gallons. So to produce 145.43 gallons of filtered water, you will use 3.64 filters, which comes to $13.34.
Add all that up and you end up at $17.63 per year. That’s a 79% savings over store-bought bottled water. Plus you’ve removed 1,095 plastic bottles from your waste stream. And if your household drinks more than 3 bottles a day, your savings would be even bigger.
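If you want to check my math or plug in your own numbers, the whole calculation fits in a few lines of Python (the prices are the ones quoted above; swap in your own):

```python
OZ_PER_GALLON = 128

# Bottled water: 40 x 17oz bottles for $3 at a warehouse club.
price_per_bottle = 3.00 / 40                                # $0.075
bottles_per_year = 3 * 365                                  # 1,095 bottles
gallons_per_year = bottles_per_year * 17 / OZ_PER_GALLON    # ~145.43 gal
bottled_cost = bottles_per_year * price_per_bottle          # ~$82.12

# Filtered tap water for the same yearly volume.
tap_cost = gallons_per_year / 100 * 0.20                    # ~$0.29
pitcher_cost = 20 / 5                                       # $4.00/year
filter_cost = gallons_per_year / 40 * 3.67                  # ~$13.34
filtered_cost = tap_cost + pitcher_cost + filter_cost       # ~$17.63

savings = 1 - filtered_cost / bottled_cost                  # ~79%
```

Change the bottles-per-day figure or your local water rate and the savings percentage barely moves; the filters dominate the filtered-water cost either way.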
[G+] SmartThings – Week 1
I’ve had my #SmartThings hub for almost a week now and figured this was a good time to share some first impressions.
Overall it’s a fun system with lots of potential. Unfortunately it’s cumbersome and awkward in practice. It needs to be radically simplified if it’s ever going to be accessible to users without at least some technical knowhow and quite a bit of patience.
Setup is easy enough. Connect an ethernet cable to the hub and plug it in. Then launch the app on your phone or tablet, create an account, and enter the “welcome code” that came with your hub. The problem I ran into here is that when I did this the app became unresponsive for what seemed like close to a minute before I was presented with a screen telling me that my hub was updating and the process could take up to ten minutes. That’s all fair enough, but it seems like the messaging could be better and more timely. Offer me something to do like read a tutorial while the hub is updating instead of just a screen with a progress indicator.
The first device I added was an Aeon Labs Multi-sensor; motion, brightness, temperature, humidity. I plugged it in, pressed the big plus icon on the app’s main screen and it immediately discovered a new device. It presented me with a few options for what it thought this device was, I chose the specific one that I had, and it was added to my device list. That’s good, that’s how it should work.
The next thing I tried to add was a Cree Connected Dimmable LED light bulb. This did not go so well. It took several tries for SmartThings to find the new device and when it did it was labeled “Unknown”. This despite the Zigbee logo on the Cree package and the claim that it would work with Zigbee certified hubs. Luckily I had done my homework and I was not all that surprised. I knew ahead of time that I would have to get my hands dirty and add my own custom Device Type to my SmartThings account for this particular bulb. That involves setting up a developer account and copy/pasting some source code. That’s not a big deal for me, in fact that’s part of the fun I was looking forward to having with the system. But considering that this is a device that can be had for $15 at Home Depot and that has the Zigbee logo right on the box, it seems like a really bad thing that it doesn’t just work. Most people aren’t going to go through the hassle of becoming a developer just to get a light bulb to work. They’re going to return it as soon as they get as far as “Unknown”. That’s unfortunate because once you get it working, it’s a really nice light bulb.
The Android App
The Android app has been the weakest part of the SmartThings experience for me. It’s slow and awkwardly organized. The main screen displays a list of sections, but it’s not obvious how those sections are defined. “Home & Family” is simple enough, it’s a list of presence sensors associated with your account. “Things” is also self-explanatory, a list of all the devices linked to your hub, including all the presence sensors that are also listed under “Home & Family”. From there it starts to get confusing. “Lights & Switches” is devoted to an appropriate subset of the devices linked to your hub. That seems somewhat redundant since all those same things are listed under “Things”. I don’t understand the purpose of that section.
Then there are a series of sections that represent the different categories that your various SmartApps belong to. For me that’s “SmartThings Internal” for the IFTTT integration and “My Apps” for the stuff I’ve built myself. These sections also seem at least partly redundant since all the same apps exist in the UX alongside the devices they are linked to within the “Things” section. What’s even more confusing is that not all of the controls for your devices end up being listed under these sections. As far as I can tell, there are entities (I’m not sure what to call them) that can control devices, but are not SmartApps. For instance, if I navigate into “Things”, choose one of my light bulbs, and bring up the list of “SmartApps” that can control it I often see things in that list that don’t exist in any of the SmartApp sections available on my main screen. I think that’s because these things are some kind of default, built-in, basic control entities that aren’t considered SmartApps even though they appear in a list that is displayed after you tap a button labeled “SmartApps”. It could also be that the things listed in the “Lights & Switches” section on the main screen are actually SmartApps and not devices. Confusing either way.
The frustration caused by the awkward layout of the app is only made worse by the fact that it’s driven by what feels like a very clunky API. I have a pretty basic setup with 2 bulbs, 2 switches, and 1 multi-sensor. Despite that, it seems to take several seconds for the app to get the data for any given screen. And the service is just plain down quite frequently which results in the app displaying big red dialogs filled with error messages. I’ve had similar problems while working in the web based IDE that developers use to build their own SmartApps and Device Types. It’s aggravatingly sluggish at times.
I’m also not the biggest fan of the visual design. It’s not very Android-like in my opinion. It alludes to Material Design in some ways, but I think it should go a lot farther. Flatten the interface and get rid of large dead spaces like the big header image on the main screen.
And there’s no widget! I’m not a big user of Android home screen widgets, but if ever there was an app that needs one, this is it. It could be something as simple as a list of your switchable devices along with their current state: tap one to toggle it on/off. Or it could be cooler still and expose sensor data without the need to launch the app and drill down into an individual device. A home screen widget that showed me the temperature reading from one of my sensors (or the humidity level inside my humidor) at a glance would be super useful.
But enough negativity. One of the coolest things about SmartThings, for me at least, is that you can get under the hood and build your own Device Types and SmartApps. As I mentioned earlier, this came in very handy when the system failed to recognize my Cree light bulbs, but you can get a lot more creative with it. For instance, in just a few hours I was able to build a Device Type that let me link my ancient Linksys IP security camera to my SmartThings system. In theory that will enable me to tell the system things like “when motion is detected, send a snapshot to my phone”, although I haven’t gotten so far as to actually try that yet. On an even more ambitious scale, I have it on my to-do list to see if I can find a way to link some of the sensors in my old decommissioned Galaxy Nexus with SmartThings.
SmartApps are basically event handlers for your devices. At time A, turn off device B. When device X detects movement, turn on device Y. Those are simple examples, but SmartApps can be much more complex. There’s a good sized library of existing SmartApps, or you can build your own. I’ve opted to build all my own SmartApps because that exploration is part of the reason I got the system, even though I’m sure I’m replicating some existing ones.
The first thing I attempted was a smart dimmer for my Cree bulbs. The basic idea is to use the brightness sensor in the Aeon Multi to drive a calculation that outputs the appropriate value for the dimmer in the light bulbs. So the lights gradually get brighter each evening as it gets darker in the house.
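SmartApps are actually written in Groovy, but the core of that dimmer is simple enough to sketch in Python. The lux thresholds and the linear mapping here are made-up illustrations, not the exact values my SmartApp uses:

```python
def dimmer_level(lux, dark_lux=10, bright_lux=200, min_level=10, max_level=100):
    """Map an ambient light reading (lux) to a dimmer percentage:
    the darker the room, the brighter the bulb."""
    # Clamp the reading to the range we care about.
    lux = max(dark_lux, min(bright_lux, lux))
    # Linear interpolation: bright room -> min_level, dark room -> max_level.
    fraction = (bright_lux - lux) / (bright_lux - dark_lux)
    return round(min_level + fraction * (max_level - min_level))
```

The SmartApp just subscribes to the sensor’s illuminance events and sets each bulb’s dimmer to whatever this function returns.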
Another idea I’m in the process of fleshing out is using multiple sensor readings to drive the decision making. When motion is detected, turn on the light, but only if it’s dark. Or at midnight, turn the lights off, but only if there hasn’t been any movement for 15 minutes.
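The shape of those multi-sensor rules, again sketched in Python with invented threshold values rather than SmartThings’ actual Groovy API:

```python
from datetime import datetime, timedelta

def should_turn_on(motion_detected, lux, dark_threshold=20):
    """On motion, only turn the light on if the room is dark."""
    return motion_detected and lux < dark_threshold

def should_turn_off(now, last_motion, quiet_minutes=15):
    """At the scheduled time, only turn the lights off if there has
    been no movement for the last quiet_minutes."""
    return now - last_motion >= timedelta(minutes=quiet_minutes)
```

Each rule is just an event handler plus one extra sensor check, which is exactly the kind of logic SmartApps are built for.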
Another very cool feature is that SmartThings has an IFTTT channel. I haven’t played around with it much yet, but I have managed to turn lights on and off from my #Moto360, which makes me excited to find out what else is possible.
One week in I’m very much enjoying SmartThings. Would I recommend it to my techno-geeky friends who like to tinker around with new toys? Definitely. Would I recommend it to friends or family who just want to quickly and easily automate some aspect of their home? No.
via [G+] http://ift.tt/1w7vEzb
I was fortunate enough to be able to be at the sound check for Sir Paul McCartney’s show at US Airways Center. Here are a few snaps from the event.
[G+] Sound check #macca
via [G+] http://ift.tt/1oK7ZPh
[G+] I’ve spent the better part of a week with my new #Nexus5 and I’m very much impressed with what I’ve seen so far.
I opted for the red version (I only wish they had a green one). In fact, if it hadn’t been for the red version I’d probably still be sporting my #GNex as a daily driver. They aren’t lying when they call it bright red. It’s actually closer to the color of a hunting vest or the safety gear that road workers wear than it is to anything I’d call red. You definitely see this thing coming. It’s a bit overwhelming from the back, but looks perfect from the front with the red side bezel outlining the glass. By contrast, the white Nexus 5 has a black bezel, so you don’t get the outline effect. The matching speaker grill is a nice touch although it takes some getting used to. For the first few days, I frequently mistook that speaker grill for a notification light.
The very first thing I did was activate developer mode and switch from Dalvik to ART. I did the same thing on my #Nexus7 not long ago and the improvement in battery life was so dramatic that it was a no-brainer to do the same on the 5.
Speaking of battery life, I’ve been very impressed with the Nexus 5 in that respect. It may not be the best there is, but compared to my GNex it’s amazing. With the GNex I had to use radio managers and other tricks to get the battery to last through a typical day. And a typical day for me is a lot less taxing on a smartphone than I think it is for most people. With the WiFi radio always on and no special tweaks, the Nexus 5 has no problem making it through the day and usually has at least a third of a charge left when I set it down for the night.
Performance and responsiveness are like day and night. That’s to be expected when making the jump from a phone like the GNex where things would start to bog down any time I launched Play Music. But even compared to the Nexus 7 (2013), the Nexus 5 feels quite a bit smoother.
A lot of people seem to think the camera is the Nexus 5’s weak spot. I’m barely a casual photographer, so this wasn’t a big deal to me. All I use the camera for is to snap quick photos for sharing with friends and family or maybe to take a bit of video at holidays. Even so, I find the 8MP rear facing camera to be more than adequate. Definitely better than the camera on the Nexus 7.
I’m trying to keep things as stripped down as possible for as long as I can. Resisting the urge to install Tasker and start fiddling too much. I did install Trigger (http://ift.tt/GV0mvj) so that I can use NFC tags to toggle radios and have the phone automatically mute itself when I’m in meetings.
I also installed Snapdragon Batteryguru (http://ift.tt/VwkP4U) after hearing it discussed on a recent episode of AAA. Basically, it tries to eke out a bit of extra battery life by learning how you use the phone and automatically adjusting how often different apps are allowed to sync data in the background. I’m skeptical, but I gave it the benefit of the doubt since it comes from the actual company that made the processor in the phone. Unless it manages to impress, it will be banished before long.
More to come.
via [G+] http://ift.tt/1mNMgTe
[G+] Contextual Homescreens with #Tasker
A few of you have requested that I share more of a step-by-step guide for using Tasker to build a set of contextual homescreens in Android similar to what I described in this post (http://ift.tt/1i3sYYN).
First of all, be sure you’re using a launcher that supports Tasker’s “Go Home” action. I use Nova Launcher. If you use some other launcher, your mileage may vary.
You may also have to experiment a bit to figure out which page numbers in Tasker correspond to which home screens in your launcher. In the case of Nova Launcher, the first home screen is index zero, but Tasker only sends the request when the page number is greater than zero. That means that I cannot use Tasker to display the first home screen in my launcher.
One of the easiest contexts to handle is probably screen orientation, so let’s focus on that.
The first thing to do is create two home screens in your launcher of choice, one for portrait and one for landscape. Keep in mind that if you’re using Nova Launcher, neither of these can be your first home screen. For purposes of this post, we will assume that we want to display Nova Launcher’s second home screen (index 1) in portrait and the third home screen (index 2) in landscape.
Once you have your two home screens ready to go and know what their indices are, it’s time to fire up Tasker…
- Create a new profile based on State->Display->Display Orientation. The default trigger is “Is Portrait” and that’s fine.
- After you create the profile, Tasker will prompt you to select an entry task for it. Choose “New Task”.
- In the task editor, add an action App->Go Home. The new action will default to Page 0, change that to Page 1.
- Back out to Tasker’s profile list and long-tap your newly created entry task. You should see a contextual menu. Choose “Add Exit Task”.
- You will be prompted to select a task. Choose “New Task”.
- In the task editor, add an action App->Go Home. For this task you want to change the value to Page 2.
That’s it. Exit Tasker and start flipping your device around to see if it worked.
This should be considered a starting point and something to build on. This basic setup has some pitfalls that you’ll find quickly. For instance, if you change screen orientation while inside an app, Tasker will close the app and take you to the appropriate home screen. That’s probably not a desired behavior. Tasker doesn’t have a built-in concept of “is there an app active right now?”, but you can still work around the situation easily. What I did was add an Application-based profile and use it to set a variable that keeps track of whether or not the launcher is running. Then I added a condition to both my orientation tasks so that they only change home screens if that variable has a certain value indicating that the launcher is active.
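In Tasker this is a variable plus an If condition on each task, but the guard logic is easier to see expressed as Python (the function and parameter names here are made up for illustration; they aren’t Tasker syntax):

```python
def target_home_page(orientation, launcher_active,
                     portrait_page=1, landscape_page=2):
    """Return the home screen page to switch to, or None to do nothing.

    launcher_active stands in for the Tasker variable set by the
    Application-based profile: True only when the launcher is in the
    foreground, so rotating inside another app changes nothing."""
    if not launcher_active:
        return None  # don't yank the user out of a running app
    return portrait_page if orientation == "portrait" else landscape_page
```

The None branch is the whole workaround: both orientation tasks simply bail out when the launcher-tracking variable says an app is in front.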
via [G+] http://ift.tt/1f2bWZN
A few weeks ago I tried Aviate and although I liked the concept, I decided it wasn’t for me. I wanted more control than it gave me over organizing my life into a group of contexts. And thanks to the experience of trying Aviate, I realized that I could do something similar using Tasker.
The basic idea was to create a different home screen in my launcher (I use Nova Launcher) for each context I wanted and then use Tasker’s “Go Home” action to automatically present the appropriate one when I unlocked my phone or tablet (#Gnex or #Nexus7).
Aviate uses time of day and location to trigger different contexts: Morning, Night, Home, Work, and Going Somewhere. But I don’t use either my phone or tablet in a way that meshes with that structure. And since the whole point of Aviate and context awareness is to get structure out of the way, it kind of defeats the purpose.
I started with my phone because I use it more contextually than I do the tablet, or so I thought. And keeping it simple I just tried implementing “Home” and “Not Home” which are really the only two contexts I use my phone in. You could use location to toggle between these, just like Aviate does. Or with Tasker you could also toggle them based on whether a certain wifi access point is in range. In my case, I already had an NFC tag by the kitchen door that I used with Tasker to toggle radios on and off when I come and go, so I just added this new context awareness to that mechanism. This approach has the added benefit of saving a little battery because it doesn’t rely on a location check, although you do have to keep your NFC radio on.
With the phone working so well, I started thinking about context on my tablet. “Home” and “Not Home” didn’t make sense there, but time of day did. Email and Evernote in the morning where daily status reports on my Roku channels are waiting for me, News in the afternoon, social in the evening. And I could control the exact times at which each context became active. That’s all pretty simple to make happen with Tasker and I can even set a different background image for each context.
Once I had that working, I started thinking about things other than time and location that might be used to trigger a context. This led me to realize that I use my tablet a lot more contextually than I thought, and the contextual awareness I’ve ended up building for the tablet is more complex than the phone’s.
Light level… If I take my tablet outside, it almost always means I’m going to read. So when a very bright environment is detected, show me a home screen with my reading apps and widgets.
Screen orientation… The only time I ever turn my tablet to landscape is when I’m going to watch video, so let’s detect that automatically and present a home screen with Netflix and Beyond Pod and my other video apps.
Rooms… Other than NFC tags, I haven’t found an automatic way to detect different rooms within the house, but I’d love one: in the living room, show me entertainment stuff; in the kitchen, show me recipes; in the bathroom, show me reading material; etc.
What other events/environments might be used to trigger a context?
via [G+] https://plus.google.com/108736442397346150027/posts/EwdKChZMwnR
[G+] Aviating with Tasker
I’ve been playing with #Aviate the last several days and it’s not bad. I installed it on both my Galaxy Nexus and my Nexus 7 even though the FAQ is very clear that it is not optimized for tablets. I’ll do my best to limit my comments to the experience I’ve had on my phone.
I really like the idea of making our devices more contextual and Aviate in its current state is a good first swing at bringing that idea to an Android launcher. The problem is that not everyone breaks their lives into the same set of contexts. Aviate has environments for “Home”, “Work”, “Morning”, “Night”, and “Going Somewhere” and tries to switch between them automatically based on time of day and your location. For most people that may be fine, but it doesn’t jibe too well with how I structure my life.
Aviate apparently uses location to decide whether you’re at home or at work. But I work from home, so I have to manually tell Aviate when I’m “at work”. “Morning” and “Night” are time based, but I haven’t found a way to change the times they (de)activate. For some people, “Morning” may be from 6 to 8 when they wake up until they leave for work. For me, “Morning” is from 8 to 9 when I wake up until I sit down at my desk. So I find myself doing a lot of manual switching between contexts, but the time based contexts are not always available. I can manually switch between the location based contexts (“Home”, “Work”, and “Going Somewhere”) at will, but “Morning” and “Night” are only available during their preset times.
Aviate has a minimal, flat look to it. I like that, but they removed so much clutter that it started to impact functionality. You can’t set wallpaper on your home screens. Instead, Aviate gives you a photo widget that you can use to display a picture on your home screen. The problem is that if you choose to set a photo, you can’t do anything else with that space. Nothing is allowed to appear above that photo.
Widgets also don’t make very good use of space in many cases. When you add a widget to your Aviate home screen, it uses the full width of your screen regardless of how wide the widget was designed to be. That can waste a lot of horizontal space depending on which widgets you use. I’ve also run into a few vertical spacing issues with widgets appearing taller than they should. Aviate is a young product and these are things that will get worked out in time, so I can’t ding them too hard for this.
I’d like to see battery level added to the header area alongside the date and time. I’m one of those people for whom the little battery meter in the notification bar is not enough and I find myself frequently pulling down the settings panel to see the actual charge percentage.
Despite the minor flaws of a young product, Aviate is a useful tool and I have found it streamlining my workflow in a number of scenarios since I started using it. But the control freak in me wants more.
As I was ruminating on the idea of Aviate this weekend and some of the ways that I wished it could be made to integrate better into the way I contextualize my life, I had a bit of a brainstorm. Could I build a better (for me) Aviate using Tasker?
I haven’t worked through the implementation fully, but I have gotten far enough to be reasonably confident that it can work. Aviate basically presents you with different home screens for different times, locations, and activities. That idea is actually very simple to replicate using Tasker. I use Nova Launcher, but this should work with the stock Android launcher as well. All I did was set up a different home screen in Nova Launcher for each of the contexts I wanted to work in. Then, using Tasker’s “Go Home” action, I can automatically switch to the appropriate context based on time or location just like Aviate does. And with Tasker I can also switch contexts based on a host of other criteria: NFC tags, screen orientation, incoming notifications, light level, whether the device is docked or charging. You could really get creative.
via [G+] https://plus.google.com/108736442397346150027/posts/bpgffbjXXcE
[G+] iOS 7: First Impressions
For the most part, I find #iOS7 to be pretty ugly. An overcorrection from too much skeuomorphism to too few effects of any kind. Gradients, shadows, textures, reflections: gone. The grey background on folders and all the blue stick icons are especially unappealing to me.
That said, I very much like the new lock screen and Notification Center. At first I wasn’t sure how I felt about it filling the whole screen when you pulled it down, but after a few minutes I was sold. Even so, we still need rich/actionable notifications. The fact that I can archive an email on Android right from the notification without launching the email client is one of the things that makes me pick up the #Nexus7 instead of the #iPad.
I also like the new multitasking app switcher UX. I don’t know if there’s an official name for that, but it’s much nicer than the old way and I think it’s better than the Android way.
I’m indifferent to Control Center. I think I would find it more useful on a phone than a tablet.
I don’t like the way the wallpaper moves beneath the icons. It’s a cool idea, but in practice it’s way too twitchy and draws my eyes away from where I want them to go.
iTunes Radio is neat enough, but not much more than another radio service. It’s not going to make me cancel my Google Music All Access subscription any time soon.
Overall performance is very good. Even on my old iPad 2 Safari is noticeably faster than in iOS 6.
via [G+] https://plus.google.com/108736442397346150027/posts/ePRMZ9h5kcV
I’ve had a whole weekend to play with the new Nexus 7 and like the original, it’s a great device at a very good price. Is it perfect? No. Are there things I wish they had done differently? Of course. Even so, it’s the best small tablet out there.
I did not have the wow reaction to the screen that I was expecting to have and I’m not sure why that is. The 323 ppi IPS display is probably the most touted feature of this device. It’s a big improvement over the original Nexus 7 and a huge improvement over the iPad 2 I’ve been using as my daily driver. All I can figure is that the Super AMOLED screen on my Galaxy Nexus with its 316 ppi caused me to not be as overwhelmed as I should have been.
Many have made this complaint and I’ll pile on. The top and bottom bezels are too big and the side bezels are too small. Holding the device in portrait mode, it’s too easy to accidentally tap or swipe with the thumb of the hand holding the device.
A lot of reviews I’ve read have lamented the decision to get rid of the textured, faux-leather backing of the original Nexus 7. Not me. I actually think I prefer the new soft-touch plastic backside.
Having two speakers is nice, but all the hype about “surround sound” is just silly to me. You can’t have surround sound with two speakers that close together. And speakers that small are never going to produce great audio. For a tablet, the audio is very good and the addition of a second speaker is more than welcome, but let’s not kid ourselves.
The Rear Facing Camera
The original Nexus 7 had a single, front facing camera. It was great for video calls and that was what I used it for mostly. I didn’t really miss a rear facing camera. But a year later, Vine has an Android app and it is more than a little sluggish on my Galaxy Nexus, so I have a feeling I’ll be getting a lot of use out of the rear facing camera on this new Nexus 7. 5 megapixels is nothing to get too excited about, but it’s adequate and should be at least as good if not better than the camera in my Galaxy Nexus. And they’re both better than the camera in the iPad 2.
The new Nexus 7 includes a notification LED, something that was absent from the original. This is both a welcome addition and a big annoyance. I’ve come to rely heavily on my phone’s notification LED. Using Light Flow on my Galaxy Nexus with its RGB LED, I can have different colors and flash rates for different types of notifications. It’s great for knowing at a glance what’s waiting for me. So I was more than a little disappointed when I found that the new Nexus 7 appears to have a white-only LED. I can still control the flash rate, but not the color. That’s a huge bummer, especially in a Nexus device that Google claims is supposed to be sort of a reference device for showing off what Android can do.
I’ve been running Jelly Bean 4.2 on my Galaxy Nexus for a while now. I’m not a gamer and I don’t have a need for restricted profiles, so 4.3 doesn’t bring a whole lot to the table for me other than the performance improvements like TRIM support. My hope is that when the Sprint Galaxy Nexus gets 4.3 I’ll get a lot more benefit than on a brand new tablet.
A Strong Sequel
This little guy is a great followup to a great original. It has its shortcomings, but if you’re looking for a small tablet at a good price I don’t think you’ll do better than this.
A frustrated Nowhere TV user said something the other day that I’ve been thinking about more than I should, and I wanted to take a minute to try to explain how Nowhere TV is different from most channels on Roku, why it is a private channel (not in the Channel Store), and why I continue to maintain that Nowhere TV is an experiment: a crowd-sourced sandbox where users and I can explore together what the platform can do.
This user was upset because a certain piece of content was not playing well on their Roku. This is not at all uncommon. On any given day there may be any number of content sources that break for any number of reasons. I do what I can to fix them, but in many cases the root causes of the problems are beyond my control. A server may go down. An RSS feed may be malformed or not updated regularly. A video may be encoded in a way that makes it not play correctly on Roku. These are things that I can’t do anything about other than wait and hope. And more often than not the problems are corrected by the publishers within a few days.
When I tried to explain to my frustrated user that the particular problem they were having was caused by the way the content was encoded by the publisher (not me) they got even more upset and said something to the effect of “If I go to a restaurant and order a burger and the waiter brings it to me burnt I don’t expect the waiter to tell me that the problem was caused by the chef and he can’t do anything about it.”
I understand the metaphor and I appreciate the point this person was trying to make, but it also demonstrates their misunderstanding about how Nowhere TV works. Nowhere TV is a content aggregator, which in simplistic terms means that it links to content from lots and lots of different sources and that I as the maintainer of the channel have no control over those content sources.
So what’s wrong with the hamburger metaphor? Let’s start with the waiter and the chef: they both work for the same restaurant, and that restaurant was responsible for cooking the burnt hamburger. That’s different from Nowhere TV in that I (the waiter) and the content publishers (the chefs) do not work for the same entity. The content in Nowhere TV is encoded (cooked) by hundreds of different entities and Nowhere TV simply curates it all and presents it to users in a somewhat unified, coherent interface.
Furthermore, the hamburger in this metaphor is an entire product being sold to the customer. One piece of content out of the hundreds (thousands?) in Nowhere TV not working for a day or two would be more like the lettuce on the cheeseburger being a bit limp.
Oh, and the hamburger would have to be free for this metaphor to work. Nowhere TV doesn’t cost users a dime.
All this talk about hamburgers is making me hungry.
A more apt analogy for my frustrated user might be one in which they buy an iPod from Target and after some period of time it stops working. They cannot then go back to Target (the waiter) and demand that they repair the broken iPod; they have to go to Apple (the chef) for that. And again, Nowhere TV is free, so it would be more like Target giving someone an iPod free of charge, the iPod breaking, and that person demanding that Target repair it.
An even better comparison, and one much more closely related to what’s really going on here, would be a web browser. If one specific web page is not displaying correctly in my web browser, I don’t complain to the developers of the browser, I complain to the authors of the web page. Nowhere TV is not unlike a web browser in that respect. It renders content authored by other people. If those other people don’t author the content well, Nowhere TV won’t be able to render it well.