You get over 24 hours of use from a full charge. - I've heard this phrase too many times from other smartphone manufacturers, and it's never true. Since this phone is made by Motorola (which I think is a great company that builds good products) there is hope, but that screen is going to be a battery drainer. Motorola's Droid Maxx held a 3,500 mAh battery, and for that phone the claim above is at least roughly true.
If they had put the 3,220 mAh battery (or larger) in a 4.7"-4.9" phone, I would gladly pay for that. Why can't smartphone manufacturers understand that longer battery life is what's lacking in mobile devices?
All the goodie features like Google Now and other location hungry services completely drain your battery in a short time. All I want is a smartphone that can last for at least one day on one charge.
Lastly, the price. The Nexus line is known for its competitive price/performance ratio. If this phone asks for more than $350, does it really have the Nexus character anymore?
I hope Android phones with 5" or smaller screens will still be produced in the next 4 years. A significant portion of the population doesn't have unimaginably big hands (or pockets) to carry these so-called "mobile" phones.
I mean, does anyone actually notice the difference? Because if that difference means I'm paying $100-200 extra for a better screen, a better graphics chip etc, and it means my battery life is reduced, I really just don't get it.
I elaborated a bit on this question here: https://news.ycombinator.com/item?id=8459761
Anyway... my biggest gripe is really price point. The Nexus 4, 5, 7 weren't any more interesting than devices coming out from Samsung or HTC. What made them unique was a sick performance/price ratio, the best mid-level entry device range for people who aren't interested in $500 a year fees for their phone+plan, yet like running the latest android on a nice device.
Now we seem to get another high-range flagship-type phone and tablet. I kinda get it, you want a benchmark device and you've got moto filling the lower-mid range quite nicely, but I really wish they'd have kept the Nexus series below the $500 range.
Love the thickness by the way. 10mm or so is great. Not a thick slab, but it's got some real grip to it. iPhone 6's thickness really sucks for me. (although it's a slightly different story when you add a case, I did like the thickness of the iphone 6 with a case when I tried it)
No wireless charging either? Not a dealbreaker, but I did use it on my 5.
Not that it's particularly important, but the Nexus 6 was the model of the androids in Blade Runner, but I'm not sure if this phone is "more human than human." :)
* Size: 4.9" screen
* Resolution: 1080p (it's really hard to find any phones this size with this resolution, which is disappointing because the PPI is clearly achievable, especially given the quad-HD resolutions being slapped on phones now).
* SD expansion slot: One thing I really liked about the Note 2 that the Nexus phones don't have. I could upgrade with a 64G SD card which didn't cost much. 32G can fill up pretty quickly with videos and photos and it's annoying Google has a philosophy which shuns SD cards.
* Battery: At least 2,600 mAh (should last at least one day).
* Stock android: No bloatware and no touch-wiz. This isn't as important as the other considerations though.
* CPU: This doesn't matter too much to me. 99% of what I do doesn't need a latest generation processor
* Memory: 2G is fine. Memory again isn't the main thing that's bothering me about the current android offerings.
What really bothers me is no-one is catering to this market segment and the trend is increasingly into the phablet market.
For anyone interested, I would highly recommend the Z3C. It's a fantastic phone and a really great size.
Edit: Disappointed with the pricing too: $649. Google had set a good trend with the Nexus series: awesome devices at exceptional prices. All that's gone. It's kind of against the Android One initiative.
If you can, try to use a phablet for at least 1-2 days before dismissing it.
One thing that might have helped is that I'm also using a smartwatch now and take out my phone less frequently, but I don't think this changes much.
Although, it's disappointing that the Nexus 9 does not also include a barometer. I suppose they've decided that the use case for fast GPS and altitude works better in a phone than a tablet - that, or, since the 9 is built by HTC it would be their first time adding one. Oddly, though, Motorola put its first barometer in a tablet (the Xoom) before they tried any phones.
Great! This is why I bought an HTC One M8 instead of a Nexus 5. Great for speakerphone or if you do watch a video on that high resolution display.
It amazes me how the essentials get sacrificed really easily and that consumers often don't demand better when they are purchasing. I would buy a Mac, but I do have to demand a matte screen instead of just getting accustomed to glare.
However it looks like this is heading to $500+ territory so the days of a cheap good Nexus may be coming to an end. Looks like the Moto E etc is filling that niche.
Apple and Ikea will redirect you to a url with country code in it (ikea.com/ja/en/bedroom kind of style) making it easy for people on the tech savvy side to manually change locale. But Google? No way in hell am I allowed to read this in English if I'm not physically in an English speaking country.
I would hope so, call me old-fashioned but I would say that once a day is the absolute limit on how many times I'd accept having to charge my phone.
Of course, you don't use your phone 24 hours a day (at least I hope you don't). However, marketing being what it is, they generally mean "very, very light usage" when they quote battery life.
Nexus 6: 5.96" 2560x1440 (493 ppi)
The iPhone 6 Plus is comically enormous. This thing is ridiculous. And the same resolution as the current 27" Apple displays.
2.7 GHz quad core vs iPhone's 1.4 GHz dual core (but 32-bit, vs Apple's 64-bit).
That is some serious hardware.
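That 493 ppi figure checks out. A quick sketch of the diagonal-pixel arithmetic, using only the resolution and panel size quoted above:

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch: diagonal pixel count divided by diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_inches

# Nexus 6: 2560x1440 on a 5.96" panel
print(round(ppi(2560, 1440, 5.96)))  # -> 493
```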
Will handbags or hip bags become common for all? Will the average size of phones rebound and approach some smaller-screen equilibrium size near 5"?
Everything depends on the adoption of these devices by younger people -- whatever becomes "cool".
My prediction is that phones will become strapped to arms or shirts somehow. I see some sort of arm-holster or dedicated shirt pocket that comfortably and securely houses a mini-tablet.
Apparently it guesses a language based on your IP address and serves you an inappropriately localized version. Kind of a noob-ish mistake to make for such a campaign. Too bad for you if you're abroad! There's a language drop-down in the "help" section, but it's not stored in the session and it's completely ignored by the other pages. So you simply cannot view the landing page in your own language!
Seems silly to have to VPN into a box in the US just to read a website.
* No 128 GB storage option. This is absolutely important for me, and it seems Apple is one of the few companies that gets this. I started running out of 64 GB a long time ago.
* No TouchID equivalent, which is excellent and works flawlessly on iPhones.
* No slow-mo videos at 200+ frames per second.
* From a hardware perspective, the N6 is still stuck in the 32-bit world. Plus it lacks a motion coprocessor that monitors my movement and fitness data all the time without draining the battery.
* An 8 MP camera is actually better than 12 MP and gives better low-light performance (if the sensor size remains the same). A higher megapixel count is actually a deal-breaker for me.
* At 10 mm thickness, that's actually going backwards from the iPhone 6's 7 mm. Every millimeter of thickness dramatically adds to the perception of "hugeness".
I thought that Google had decided not to release a phone named Nexus 6 out of deference to Blade Runner and Philip K. Dick's family, who were annoyed that they used the name.
Edit: I guess it's $649.
This is from Motorola http://motorola-blog.blogspot.fr/2014/10/nexus-6-from-google...
Of note: Nano SIM, 493 ppi display, 4K video recording.
(The specs on the Google page have a button to expand them. It's a bit hidden and not linkable, hence the screenshot.)
Given that the battery isn't removable too, this makes for a very shitty development phone. (What do we do if it freezes? How do we hard-remove power?) I think this won't do.
I'm also rather disappointed this isn't the Snapdragon 810 or 808. For a phone this expensive, it doesn't measure up.
Overall this is not what I hoped for from a new Nexus. Perhaps Android Silver will deliver something better, but I doubt the Nexus 6 will be my next phone.
Today, some startup or enterprise somewhere in the valley may be putting the final touches on a press release with bad or shocking news (layoffs, etc.) timed for tomorrow, so that little attention will be paid to it while most of the tech press is glued to Apple's event.
Life repeats on and on as before.
This phone was advertised as being as small as a credit card. Now the Nexus 6 is as tall as a whole opened wallet.
Having said that, this transition demands that they compete with the iPhones and Samsungs. Taking that into account, they should've released an additional 4.5" phone along with the 6". That could steal away the people who hesitated to get the 4.7" iPhone 6 due to its size, and anyone eyeing the humongous 5.5" iPhone 6 Plus would be even more interested in a 6" device with not much larger physical dimensions. The goal should be to find the perfect two offerings to cover the whole spectrum of buyers.
In fact, now that I've seen the official N6 specs, I'm going to go ahead and order an N5 to replace my Galaxy Nexus because I need a device that does BLE. In a couple of years, perhaps it will be time to upgrade to a refurb N6 or maybe a less expensive N6.5, if they decide to go back to the loss-leader pricing of the older Nexus models.
I'm disappointed that the N6 doesn't come with memory expansion. A MicroSD slot might have tipped the scales for me.
Eric Schmidt recently said in Germany that Amazon is their biggest competitor. For that to be true, they need to seriously up their support.
Seems kind of big for a phone though. This Nexus 4 that I have is about as large a phone as I'd want.
1) To fit a larger battery, a better screen, more horsepower, and a better camera with OIS, they just have to make them bigger.
2) The Nexus line has gone out of control: instead of buying other phones, people are queuing up for Nexuses, which were initially meant to be just reference devices for new Android OS releases. Instead they became so popular that they had an impact on sales of other Android devices.
3) They got focus groups or questions all wrong.
Sort of like "you can watch this movie as soon as you grow up..."
Until somebody corrects me, I'll just assume this thing has the 1 GB of RAM that is pretty standard these days. But that's not enough, IMHO.
I'm still looking for a replacement for my Nexus One that matches three criteria: metal case, stock Android, not stupidly large. With the noticeable exception of the tiny amount of app storage the N1 still holds up well for my purposes.
> 13MP front-facing with optical image stabilization
> 2MP rear-facing
Is it really 13MP front-facing? Selfie optimization!
- Sincerely, A long time Nexus user !
To me, the follow-up to the Nexus 5 is the Moto X. Decent specs, moderate price, stock Android, etc.
For a "phone" that has 493 ppi, you'd expect the site introducing it to be retina-ready (a couple of the images and the favicon aren't).
Display: 5.96" 1440 x 2560 (493 ppi)
I do not think this is how technology should evolve.
I thought that Verizon and Google parted ways permanently for the Nexus line after Verizon botched the Galaxy Nexus so badly in 2011 - this is the first Nexus available on Verizon since then.
(There's a great image of a garage door opening & closing about 2/3 of the way down the page if you don't feel like reading the whole thing.)
The questions he investigated: "Can we figure out the rate at which a propeller is spinning by analyzing this kind of photo? And can we figure out the real number of propeller blades in the photo?"
The rolling shutter is also why stills from gopro videos never quite live up to how clear the videos look in motion.
The cover photo from this month's parachutist magazine is a great example:
Notice the right leg of the jumpsuit: it's flapping in the wind as the shutter rolls over the scene.
When people use the slow-mo feature for gopro videos everything kind of morphs rather than moving naturally. I've always found it to be a cool effect:
Exposure is the total time our whole light sensitive area is exposed to the light coming from our scene. You can think of it as an integral of the sensor (or film) area exposed as a function of the time, divided by the total sensor area.
In the examples he uses the term exposure to describe the total scantime of the sensor, whilst it seems that his actual exposure (which is equal to the time each row of pixels samples the scene) is much smaller.
It may sound like a small difference, but to reproduce the effect we essentially need to match two parameters: exposure and scantime. While exposure is easy to set, scantime is pretty much hardcoded and depends on the physical characteristics of the camera. Even an analog shutter has a scantime at small exposure times.
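To make the row-by-row sampling concrete, here is a toy rolling-shutter simulation (all the parameter names and values are hypothetical, chosen just for illustration): each row of the "sensor" samples a spinning propeller at a slightly later instant, which is what bends the blades in photos like the one discussed above.

```python
import math

def rolling_shutter_capture(height=100, width=100, scantime=0.02,
                            rev_per_sec=50.0, blades=2):
    """Simulate a rolling-shutter photo of a spinning propeller.

    Each row y is read at its own instant t = (y / height) * scantime,
    so the propeller has rotated by a different angle for every row.
    Returns the set of (x, y) pixels where a blade was seen.
    """
    cx, cy = width / 2, height / 2
    r = min(width, height) / 2 - 1
    image = set()
    for y in range(height):
        t = (y / height) * scantime            # instant this row is read
        base = 2 * math.pi * rev_per_sec * t   # rotation so far
        for b in range(blades):
            angle = base + b * (2 * math.pi / blades)
            # walk outward along the blade; keep points landing on this row
            for step in range(int(r)):
                px = cx + step * math.cos(angle)
                py = cy + step * math.sin(angle)
                if int(py) == y:
                    image.add((int(px), y))
    return image

img = rolling_shutter_capture()
print(f"blade pixels captured on {len({y for _, y in img})} distinct rows")
```

Setting `scantime=0` models a global shutter: every row sees the same instant and the blades come out straight, while any nonzero scantime smears them into curves.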
Photo-finish shots also end up looking pretty weird: http://coachdeanhebert.files.wordpress.com/2007/08/100-photo...
This effect was manipulated to extract more information for this: http://newsoffice.mit.edu/2014/algorithm-recovers-speech-fro...
They put the "Retina" display in the iMac. This means people will buy it. Higher volume means whoever (LG, I think?) is manufacturing the screens will have to produce more, driving the cost down. That means they will sell variants. Then their competition will also sell competitive options because nobody will want 1080p on a computer screen anymore.
Monitor technology has been stalled for years. This is going to be a gigantic kick in the pants to the industry!
In other words, buy a Retina Cinema Display, get the computer to power it for free.
The scrolling transition at the top of this page really impressed me.
"4k monitor": 3840 x 2160
"5k monitor": 5120 x 2880
That's a lot of pixels.
It might be a good idea to be skeptical about spending >$1,500 on a 27-inch monitor in Q4 2014. It's difficult to notice any pixelation on a 27" screen at a resolution of 2560x1440, so clearly the reason to upgrade to 5120x2880 is for the extra screen workspace. But unless you have very good vision, you're probably not going to be able to read text at 5120x2880 without zooming. What's the advantage?
For $1,000 you can buy two 27" 2560x1440 monitors, which is a huge amount of workspace. Also, a single $300 midrange GPU can drive both monitors at full resolution. A couple years ago, that was cutting-edge tech, but it cost ~$2600. Also, two monitors offer a better user experience than one monitor, since window management is a bit easier.
Would anyone mind explaining whether the pros of a 5k monitor outweigh the hefty pricetag?
Is it 3.5 years or 4 years now since the original Retina MacBook was introduced?
It takes roughly 17.2 Gbps of bandwidth to drive a 4K @ 60 fps signal in a single stream (Single Stream Transport); DisplayPort 1.2 has just enough bandwidth to support a single 4K @ 60 fps SST stream, but 5K is far too large for the standard. This iMac comes stock with an R9 M290X (a 2012-era GPU) which supports up to DisplayPort 1.2. To get the bandwidth needed for 5K @ 60 Hz on DP 1.2, Apple would have to overclock the DisplayPort signal by 50-100% on single-stream transport.
It seems like the M295X upgrade is a necessity for this thing to render well.
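The bandwidth ceiling is easy to sanity-check with raw pixel-rate arithmetic. This sketch ignores blanking intervals and protocol overhead, so real requirements are somewhat higher, but it already shows 5K blowing past DisplayPort 1.2's 17.28 Gbps effective limit:

```python
def raw_video_gbps(width, height, hz=60, bits_per_pixel=24):
    """Raw (no blanking, no protocol overhead) pixel bandwidth in Gbit/s."""
    return width * height * hz * bits_per_pixel / 1e9

DP12_EFFECTIVE_GBPS = 17.28  # DisplayPort 1.2, 4 lanes, after 8b/10b coding

for name, w, h in [("4K UHD", 3840, 2160), ("5K", 5120, 2880)]:
    gbps = raw_video_gbps(w, h)
    verdict = "fits" if gbps <= DP12_EFFECTIVE_GBPS else "exceeds"
    print(f"{name}: {gbps:.1f} Gbit/s raw -> {verdict} DP 1.2")
```

Raw 4K @ 60 Hz comes to about 11.9 Gbit/s (which grows toward the ~17 Gbit/s quoted above once blanking and overhead are included), while raw 5K is already over 21 Gbit/s, beyond anything DP 1.2 can carry in a single stream.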
Most media consumption is done at 1080p or below; not everyone is fortunate enough to stream 2K or 4K content yet, and we are already pushing to 5K.
Many computer users suffer from eye strain because they have to stare to resolve the information; their eyes dry and then each blink causes tiny scratches which over time causes serious damage.
Highly personal: I hate all the marketing "Retina HD" blah blah from Apple, but I love these beautiful iMacs. I want one :-).
MacBookPro 13'' Retina - choppy scroll in Chrome
So the more advanced laptop, with 4x the pixel count, was the worst in speed.
Gates rightly (and self-servingly) also points out that Piketty does not consider philanthropy as a means to correct some of capitalism's imbalances. Here are a few of Gates' conclusions:
> Piketty is right that there are forces that can lead to snowballing wealth (including the fact that the children of wealthy people often get access to networks that can help them land internships, jobs, etc.). However, there are also forces that contribute to the decay of wealth, and Capital doesn't give enough weight to them.
> I am also disappointed that Piketty focused heavily on data on wealth and income while neglecting consumption altogether. Consumption data represent the goods and services that people buy (including food, clothing, housing, education, and health) and can add a lot of depth to our understanding of how people actually live. Particularly in rich societies, the income lens really doesn't give you the sense of what needs to be fixed.
> Piketty's favorite solution is a progressive annual tax on capital, rather than income. He argues that this kind of tax will make it possible to avoid an endless inegalitarian spiral while preserving competition and incentives for new instances of primitive accumulation.
> I agree that taxation should shift away from taxing labor. It doesn't make any sense that labor in the United States is taxed so heavily relative to capital. It will make even less sense in the coming years, as robots and other forms of automation come to perform more and more of the skills that human laborers do today.
> But rather than move to a progressive tax on capital, as Piketty would like, I think we'd be best off with a progressive tax on consumption. Think about the three wealthy people I described earlier: one investing in companies, one in philanthropy, and one in a lavish lifestyle. There's nothing wrong with the last guy, but I think he should pay more taxes than the others. As Piketty pointed out when we spoke, it's hard to measure consumption (for example, should political donations count?). But then, almost every tax system (including a wealth tax) has similar challenges.
> Like Piketty, I'm also a big believer in the estate tax. Letting inheritors consume or allocate capital disproportionately simply based on the lottery of birth is not a smart or fair way to allocate resources. As Warren Buffett likes to say, that's like choosing the 2020 Olympic team by picking the eldest sons of the gold-medal winners in the 2000 Olympics. I believe we should maintain the estate tax and invest the proceeds in education and research: the best way to strengthen our country for the future.
40% of youth in the USA today do not even have that, and as such, have virtually no chance at all of being able to even play in the capitalist economy. They will be successful if they avoid homelessness and hunger, which will be a daily struggle.
Gates, Buffett, and others who claim to be philanthropists work from a point of ignorance, believing that all youth at least have their basic needs covered by this society. They are blind. Piketty at least acknowledges the underclasses.
These so-called philanthropists depend on the existence of a permanent underclass to support their "charity".
I was raised on the edges of this underclass, and even today, after 20 years in the business doing quite well, I worry every day that I will soon be homeless or hungry. I have no inheritance to count on, other than debt from my family. The idea of taking a couple of months without income to do a startup is laughable: how would I eat and make my house payment? Most of the peers from my youth are on public assistance, in jail, ill, or worse. They are easy prey for the tech titans, who fill their eyes with glittery visions that make them forget their hunger and cold. This is in the Bay Area, mind you, less than 20 miles from the Valley.
Gates' idea of a consumption tax would heavily penalize these people, who have been enslaved into a consumer lifestyle by the wealthy who exercise control over their lives.
The whole "inequality" debate laments the demise of the middle class. But it never acknowledges the permanent underclass that is a necessary component of a society that can take the time for the inequality debate.
I got out, though you never lose the psychic clutches of abject poverty. Most folks never get out.
What you do see, though, in the same Forbes 400 list is that 6 of the top 10 people on that list didn't build their companies (in the sense that Bill Gates did); they inherited them.
I don't know how much the current crop Kochs or Waltons are responsible for the current success of their ventures, but this is the type of thing I believe Piketty had in mind as much as the "parcels of land" approach. Also rent seeking is a problem distinct from a rentier class.
Can you argue convincingly that these people would have been just as successful without the advantages of their births? I suspect that would be quite difficult.
Piketty makes it clear that America is a special case because of all of the 'almost free' capital in terms of land and population growth that had existed over the last couple hundred years. But, he claims that America in the future will more resemble Europe of the last few hundred years.
Also, although Gates claims that half of the richest people in the US have gotten rich from their businesses (I haven't checked if that's true), he almost ignores the fact that most of these richest people have come from upper middle class background, at the least. He also ignores the huge number of richest people who have attained their wealth from financial instruments.
If you want the highlights, this Economist article does as decent a job of summarizing 400+ pages as you can hope for in 4 paragraphs.
As for the content, the main takeaway is his r > g argument.
One of his other big ideas Bill hits on is his tax on capital. Bill proposes a tax on consumption instead.
> But rather than move to a progressive tax on capital, as Piketty would like, I think we'd be best off with a progressive tax on consumption.
Maybe not surprisingly, since I work in finance, most of my colleagues are on Bill's side and not Piketty's here. To be fair to Piketty, he chose a tax on capital because he's coming from the perspective of how we prevent the accumulation of wealth over generations, whereas Bill is coming at it from how we raise enough taxes to pay for the services the government needs to provide.
I would recommend reading this book. It's clearly a labor of love for him and he's spent the time to back it up with data; just don't expect to agree with all his conclusions.
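The r > g mechanism can be sketched with a toy compounding exercise (the rates here are illustrative placeholders, not Piketty's estimates): if net returns on capital r exceed economy-wide growth g, a passively invested fortune grows relative to national income without any labor at all.

```python
def capital_to_income_growth(years, r=0.05, g=0.02):
    """Ratio of a passively invested fortune to national income after
    `years`, both starting at 1.0; it compounds at (1+r)/(1+g) per year."""
    capital, income = 1.0, 1.0
    for _ in range(years):
        capital *= 1 + r   # capital earns return r
        income *= 1 + g    # the overall economy grows at g
    return capital / income

# After a century at r=5%, g=2%, the fortune is roughly 18x larger
# relative to national income than when it started.
print(f"{capital_to_income_growth(100):.1f}x")
```

Even a 3-point gap between r and g, left to run for a few generations, is enough to produce the snowballing concentration the book worries about.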
Bill, you didn't read far enough into the Forbes 400 article, which says:
"We didn't include dispersed family fortunes. Those appeared on our America's Richest Families list, which came out in July."
That money from 1780 is far from 'long gone.' In fact, most wealthy people started life rich and got richer, largely because they could afford to place bigger bets in life and take advantage of opportunities for labor-free capital gains that simply aren't available to people born without capital. Contrary to Gates' assertion, a 2012 study found that two thirds of the Forbes 400 were born wealthy (http://www.faireconomy.org/bornonthirdbase2012).
1. One guy is putting his capital into building his business
2. A woman who's giving most of her wealth to charity
3. A third person is mostly consuming, spending a lot of money on things like a yacht and plane
All three actually spend their wealth. The problem with extreme inequality is that really wealthy people can invest, give, and spend AND still sit on massive amounts of wealth that get passed to the next generation.
A progressive tax on consumption wouldn't be enough to fight this.
One core argument of Piketty's is that the current debate has been highly distorted by the massive wealth redistribution that happened as a consequence of the two world wars, something Gates doesn't acknowledge in his review.
[UPDATE] I overlooked the estate tax Gates proposed in his review.
I like how he starts with the things he broadly agrees with. Even just agreeing that extremely uneven distribution is a problem and why, gives us a starting point. I personally take a slightly Marxist view on this. I don't think that extremely uneven distribution is politically stable, or compatible with democracy.
I am slightly doubtful of taxation as a solution. Taxation is stuck really. The problem is that most tax regimes are designed to maximize tax revenue while minimizing damage to the economy.
Consumption/sales taxes, income taxes, and other middle-class taxes are convenient in that they are very hard for people to avoid and they don't affect behavior much. A marginal income tax of up to 60% is generally assumed to have a negligible effect on how much people work.
A 1% annual tax on wealth equates to a $10m annual cost of living in a country for a billionaire. Would they move (themselves and/or their wealth) to avoid it? Can some of that $10m be used to find ways of avoiding the rest of it?
I think that ultimately, wealth accumulation needs to change in order to change the structure of the economy.
Also, I like that Gates considers cultural norms, not just policy. What Gates & Buffett have committed to is a partial solution. If 20-30% of billionaires do this, it might be enough to change the overall distribution somewhat.
In any case, more questions than answers.
I agree with generally all of Gates' thoughts here, but there are two actors to consider when discussing how to address problems associated with inequality - the government, and those who own capital. There is an implicit assumption by both Gates and Piketty that a government is always powerful enough to control what portion of wealth flows to capital owners, and what portion flows to labor.
The risk with a powerful government that can do this is it can be bought. A perfect example of this is Gates own story with The Common Core. From everything I've read, his own foundation basically bankrolled the lobbying, acceptance, and implementation of this program, much to the dismay of many educators I know. As long as a democracy bequeaths power to its government, there will be moneyed interests lining up to tilt that power in their favor.
The other option is to limit the power the government has to control the flow of wealth. No one wants to buy a democracy that doesn't have any power to protect their interests. What ends up happening is wealthy actors have to figure out other ways to maintain their wealth - consumption in things like yachts and fancy cars goes down and investment goes up. As investment goes up, g goes up because that investment is creating more jobs and more competition for employees, and r goes down because the capital markets become flooded.
> I fully agree that we don't want to live in an aristocratic society in which already-wealthy families get richer simply by sitting on their laurels and collecting what Piketty calls rentier income, that is, the returns people earn when they let others use their money, land, or other property. But I don't think America is anything close to that. Take a look at the Forbes 400 list of the wealthiest Americans. About half the people on the list are entrepreneurs whose companies did very well (thanks to hard work as well as a lot of luck). Contrary to Piketty's rentier hypothesis, I don't see anyone on the list whose ancestors bought a great parcel of land in 1780 and have been accumulating family wealth by collecting rents ever since. In America, that old money is long gone: through instability, inflation, taxes, philanthropy, and spending.
The USA is somewhat of an edge case here due to its youth. Look at older countries, such as France (where Piketty and I are from), and you'll see a marked difference.
Really? I always considered this kind of spending a good thing for society. Other businesses and people get that money, they create technology, materials and processes that would likely not exist otherwise, there's still tax being paid along the way and we get a bit of cultural heritage as a bonus (castles, boats, planes, cars, the Sistine Chapel, a lot of paintings - all of which exists only because some rich man decided he wanted it).
Taxing luxury is a mistake and a potential slippery slope IMO. IIRC, the US tried it in the past with disappointing results.
Yes, what you invest in you tend to get more of. But, each capital transfer does not destroy the capital. So, if you are buying a yacht, that capital goes to the designer, the builders, the welders, the suppliers, etc. Now, the point is that these people now have their own choices to do something with the capital received. Some of it will go for food, some of it will go for BBQ grills, some of it will go for big screen TV's. And, then the people receiving that capital will make their choices ad infinitum.
(Perhaps a performance artist setting a million dollars in cash alight would actually destroy capital. There are undoubtedly more examples of waste.)
I guess I can still see the incentives that would be built into a tax system (as they are built into ANY tax system) to alleviate what may otherwise be burdens on government into encouraging more social utility. But, I just wanted to emphasize that it is not that luxury spending has NO social utility, it just diffuses the social utility into multiple second order spending decisions.
I think philanthropy is great, but only a few people are doing it. And those who do it don't do it effectively, do it too late, or focus on the symptoms rather than on the causes, which are much harder to deal with. Philanthropy has become an accessory or a career suffix for those who have gotten lucky.
Gates talks about how the middle class in China and elsewhere is getting bigger. True, but philanthropy has had little to do with this improvement. Aggressive, centralized economic management and free-trade policy with the US helped sustain a healthy middle-income class in China, Mexico, and Colombia.
This is debatable. For one, aristocracy is not exactly this, as it is a system based on the notion of privilege, not just wealth. Concretely, it often means the same but still, using the word is quite misleading.
Secondly, one may argue that people, and thus families, should have the right to build stuff in the very long term, that is in a multi-generational way. Even if this thing they build is merely a framework of financial comfort. I doubt anyone would question the morality of offering gifts to children. So it's not clear to me why it would be wrong to give a child the means not to worry about money in life, and do it recursively through centuries.
Not that I really disagree with the point here, but I can't help but wonder whether or not the people in the categories he listed exist in large enough numbers to get out of the noise category. Heck, I would think there are about as many literal lottery winners as there are folks in this category.
> Governments can play a constructive role in offsetting the snowballing tendencies if and when they choose to do so.
Governments could of course play a constructive role in many things, just the way my son could spend more time doing math instead of watching cartoons. The sad problem with human beings is that unless they get something out of it, they won't do anything. Governments have a strong incentive to tax everyone more and more in the name of inequality and the environment, but they have zero incentive to do anything about inequality with that money.
I would rather live in a world with extreme inequality rather than a world where government is trying to bring "equality".
I am hoping you posted this not only to express your opinion, but to engage in conversation. And it appears you certainly are.
My criticism of your response is that philanthropy distorts economic resources through the same mechanism that consumption does.
That is, by dictate.
I believe in the idea that people who are affected by decisions made should have a say in those decisions. This is the value of democracy.
And just like consumption of fine wine and jewelry distorts the economy to produce more of those things, philanthropy moves vast economic resources toward what I would call "the pet projects of philanthropists". Typically the people affected by the philanthropic expenditures have no say. More often than not, no democratic process takes place.
A king who lives a modest lifestyle, who spends all his wealth on what HE thinks is just and good, is still a king. And I hold contempt for his arrogance.
Of course buying yourself things you don't need delivers less value to society than commerce or philanthropy. Of course. Absolutely.
Maybe buying yourself things gives you the chance to have unique experiences. Maybe hitting golf balls into the ocean from the deck of your megayacht gives you the relaxing moment you need to figure out how to boost profits by 300%. Maybe doing a ton of blow and driving a Jaguar give you the necessary experiences to write awesome rock tunes that inspire millions. Maybe if Galileo Galilei hadn't bought himself a lump of clear glass in 1609 we wouldn't know about space.
Surely the 4th is the most important - the one who is NOT spending most of their money.
Even if it is just a special sales tax that targets the rich only, discouraging spending by people with money can hardly be a good thing; that's how recessions happen, and the rich will continue to capture a bigger share of the total wealth. For an economy to flourish we need wealth to flow from individual to individual. If all the water on earth were stuck in the ocean we'd have a serious problem. Same with wealth.
In theory I like the idea of a "tax on consumption", because it suggests a penalty on conspicuous consumption and wasteful spending. In practice, taxes on consumption are regressive, i.e. they affect the poor more than the rich. Is there a way to tax "bad" consumption without penalizing people who "consume" a large proportion of their income simply because they don't have much of it? I'm picturing some byzantine tax code system where functionaries make value judgements about the morality of various kinds of goods.
Gates: "Far more people, including many rentiers who invested their family wealth in the auto industry, saw their investments go bust in the period from 1910 to 1940, when the American auto industry shrank from 224 manufacturers down to 21. So instead of a transfer of wealth toward rentiers and other passive investors, you often get the opposite."
It's difficult to do so, because it's hard to say what can benefit a person who is wealthy already.
I mean, I can point to data suggesting that people who earn better wages, get a head start, have libraries in their community, get a college education, etc. are less likely to commit crimes, but your wealth probably means that you can afford good security.
I can say that in a more egalitarian society, vaccinations would be free, and thus you and your children are less likely to get diseases, but you have good health care, and so this isn't a huge issue.
So I guess the best way is to point out that in a more egalitarian society with free or low-cost education and funding for research, general progress in the sciences would happen faster. You get better medicine, better technology, can potentially live longer. Such an economy will also grow more quickly, and while under more stringent taxes you may not capture as large a share of the pie as you otherwise could, a bigger pie would cancel that out.
The interview question on stages nowadays seems to be: what do you believe that is not commonly believed? I suppose my out-there idea is that the people who control production, the people who own capital, are not interested in the economy growing as fast as it sustainably can. They want a slower rate of growth in order to maintain more control of the system. This idea not only goes against current economic thinking and investors' chasing of maximum returns, it's an anti-Marxist idea as well. It seems to be happening, though. It's why people like Paul Allen and Nathan Myhrvold pour money into patent companies. It's why the Joint Chiefs of Staff beg Congress to cut funding for old Cold War tank factories every time the military budget comes up, yet the billions for useless tanks and the hundreds of billions for the designed-by-committee F-35 boondoggle keep flowing.
As Marx notes, something like a "war on poverty" is a joke, since people are not only purposefully kept poor but purposefully thrown into poverty, like during the enclosure of the commons in Europe. A surplus army of labor is a major tool to keep workers from keeping more of the wealth they create.
Sooner or later, the good ship USS Wall Street will inevitably run aground, and the economy will grind to a halt in a way that will make modern Greece or 1930s USA look good. Then it will just be a question of what working-class people and professionals do in their new situation. It's not really the working-class people, who are familiar enough with reality, one has to wonder about; it's the US professional classes, who are more highly indoctrinated than probably any group of people in the world. My jaw drops as I hear US professionals pontificating about things going on halfway around the world about which they know absolutely nothing. NPR is ultimately a heavier propaganda outlet than Der Stürmer, or Fox News.
It's fine to have a progressive tax. That progressive tax becomes useless if you gut entitlement spending and instead spend on programs or services where the money ends up back in major corporations' hands (defense spending, private contracting, etc.).
In order for a progressive tax to be corrective it has to put the money to work for people in the lower end of the tax curve.
>>> A rich person will only spend a small fraction of their income, so in the end, proportionally, poor people end up paying more taxes than rich people, and that's actually something that's hurting poor people here.
^^^This seems to actually be a sensible argument for there being a problem with a simple flat consumption tax.
A solution might be a progressive tax on consumption, one that only becomes meaningful once spending on consumption exceeds that of, say, the lower 33% of the populace. The problem would be figuring out how to apply this tax, since it couldn't be done through the sales tax as it currently exists.
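For illustration, the mechanics of a marginal-bracket consumption tax are easy to sketch. The brackets and rates below are entirely hypothetical; the point is only that low spenders pay nothing while heavy consumption is taxed at the margin:

```python
def progressive_consumption_tax(spending,
                                brackets=((20_000, 0.00),         # first $20k untaxed
                                          (100_000, 0.10),        # next $80k at 10%
                                          (float("inf"), 0.30))):  # the rest at 30%
    """Tax annual consumption using marginal brackets (hypothetical rates)."""
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if spending > lower:
            tax += (min(spending, upper) - lower) * rate
        lower = upper
    return tax

print(progressive_consumption_tax(15_000))   # a modest spender pays nothing
print(progressive_consumption_tax(500_000))  # a heavy spender pays ~$128k
```

As the comment notes, the hard part isn't the arithmetic but the collection mechanism: per-transaction sales taxes can't see a household's cumulative annual spending.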
The problem with this line of thinking is that yachts and planes don't grow on trees. They're built and maintained by people who have jobs (typically well paid jobs) because someone with wealth is paying for it.
So, to me at least, there is only one type of wealthy person who doesn't add value and that is the hoarder.
so in his example, investors and philanthropists have more volatility around the potential effects of their wealth, so the multiplier can be >1, <1, or =1, but the key is that it can be >1, which means that it can be value generating. consumers' multipliers are necessarily <= 1.
as an aside, i'm also intrigued by the idea of economic velocity as an indicator of economic health (as opposed to the gini coefficient, which is a rather static measure) that's tangentially related to the idea of economic inequality. of course, for capital to have a stabilizing effect on the economy, it needs to have a high dispersion coefficient, but that's another discussion.
Also, using the 1910-1940 period as an example to show that Piketty is wrong doesn't make much sense, given that Piketty's data show that inequality actually fell in that particular period.
I don't understand this. I mean, you just get poor much more quickly when you have less wealth. I understand that you don't stay rich, but it secures your kids' futures.
George Reisman has offered a thorough critique of Piketty's arguments, arguing across a range of topics, from David Ricardo's insights into the role and formation of capital to the meaning and value of inequality in both income and wealth.
And when does inequality start doing more harm than good?
One of the problems is that people get different r's. Just look at VC firms. A pension fund that invests in venture capital funds gets mediocre returns: nothing much better than they'd get from an index fund, and often less. VC partners collect 2-and-20 and get to allocate favors (because it can benefit their careers to make decisions that are suboptimal for the portfolio, and they often do). The "real r" in that engine might be higher (if VCs focused on technical excellence rather than their own careers, I think we'd see quite a respectable r) but the delivered r is mediocre. That's just one example.
To go further, and I don't know how to solve this: if you have good relationships with various counterparties (especially, banks) you can get a low-risk r > 15% in arbitrage. Contrary to stereotype, arbitrage is neither risky (it's low in risk, and most arbitrage blow-ups occur because some hotshot trader got bored and started taking unauthorized positions) nor is it socially harmful (it provides liquidity to markets, which is a good thing). It is, however, not open to most people.
There are many things that cause "wealth decay" or normalization. I'll name four. Hyperinflation and violent revolution are the most disruptive (sorry, San Francisco, but disruption is a bad thing). Taxation is the smoothest but can be ineffective (loopholes). Wealth management is the fourth: at some point, a large fortune has management overhead, and as its owners become less interested in the day-to-day running of the money, much of that excess "r" goes to the agents rather than to them.
As for whether r > g, I'd prefer two things. First, I'd like everyone to have access to the same r, but I don't know how to achieve that. Second, g isn't constant. World economic growth is about 4.5% per year. I believe it could be 8% or 10% with some heavy R&D investment and with better (and, quite frankly, smarter) people running the world. The all-time record for world GDP growth is 5.7%, set in the 1960s, but we have so much more technology now, and the shape of economic growth (while I don't believe in a "singularity" in the theatrical sense) is faster than exponential.
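As a toy illustration of why the gap between r and g matters so much: if capital returns compound at r while income grows at g, the capital-to-income ratio grows at roughly (1+r)/(1+g) per year. The numbers below are assumptions for the sketch (r = 5%, g = 1.5%, starting capital at four years of income, all returns reinvested):

```python
def capital_to_income_ratio(years, r=0.05, g=0.015, start_ratio=4.0):
    """Evolve the capital/income ratio when all capital returns are
    reinvested: capital compounds at r while income grows at g."""
    ratio = start_ratio
    for _ in range(years):
        ratio *= (1 + r) / (1 + g)
    return ratio

# After 50 years the ratio has grown more than fivefold (~21.8):
# capital income increasingly dominates labor income.
print(round(capital_to_income_ratio(50), 1))
```

A "decay" term d on wealth (taxes, management overhead, dilution across heirs) simply lowers the effective r in this loop, which is why even a modest drag changes the long-run picture dramatically.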
Even now, we have a world in which programmers (not 10x or 2.0+ engineers, but just regular programmers) become 10-12% more productive each year due to tool improvements. Motivated, ambitious programmers can do 30% per year. The bad news is that it's almost impossible for a programmer to grow her income at any rate near that. In fact, as she becomes more experienced, she's also more specialized and dependent on her employers (or clients) for great projects. They'll pay her pennies on the dollar relative to what she's worth, that charge being for the "favor" of allocating the good work. The reason why 10x engineers only make 1.3-1.5x salaries (until they become consultants, at which point it's more like 2-3x) is that their employers are very good at playing the "we can give you a raise, or we can give you career-positive work" game.
The software economy is at the fore of what's happening to other industries, but people in most sectors are a good deal poorer. We're comfortable upper-working class people complaining about our slide into the upper-middle-working class, but people outside of tech don't have anything to lose.
What we actually need to focus on is g, and r_labor. We want a high r_capital and an even higher r_labor. Sadly, badly managed economic growth tends to make r_labor negative. That happened in the American 1920s with agricultural commodities (contributing to spiraling rural poverty, which led to the Great Depression), and it's happening to all human labor in the 2010s.
And oddly chimes very well with my own views on Piketty. (Yeah, me and billg, great minds you know:-)
The tl;dr is perhaps that r·d > g is a better formula, where d is the rate of decay of wealth. And yes, we need a wealth tax; can we make policy that differentiates between good wealth (used for socially beneficial purposes) and bad wealth (yachts, coke and hookers)?
I agree, but that is a solution applied too late. "If we have robber barons we should encourage them to be philanthropists" misses opportunities to use regulation, competition, and externality pricing to flatten the profits accruing to monopoly holders, and so reduce the amount of wealth hoarding in the first place.
That said nice piece, and billg still gets my vote for top ten nicest billionaire.
Capital = purchases expected to generate a profit.
Labor = purchases expected to generate work. It need not generate a profit necessarily - e.g. paying someone to mow your lawn.
Consumption = purchases expected to generate pleasure, or avoid pain.
I'm sure the basis of these distinctions rest on some idea of social utility - that trying to turn a profit has more social utility than eating a Twinkie. Maybe we should examine that assumption also.
And consider how these notions entangle themselves in practice. A company car intended for non-personal use is considered an asset to the company, and treated as capital. But a similar car used for commuting is considered a consumption item.
Also, you could compare philanthropy with the "foreign aid" that western countries have been giving to poorer ones for decades now. Did it help? In many cases, it made things worse, because the money did not help the people to help themselves, but made them dependent on the aid.
The point is also that philanthropy -- as it may silence one's own conscience -- is often the overflow of the overflow. We give because we have more than we need, and then we give what we need least. But what people really need is not somebody who throws pennies in their hat so they can buy some old bread. What they really need are equal chances: to attend the same universities, to have the same jobs, and to earn the same money as other people with the same talents.
You might argue: but Bill also fought his way from "rags to riches". No, that is not right. Bill was born into a well-off family and attended Harvard College. With such a background, it is much easier to go from rich to riches than if you come from the slums of Uganda (or even Harlem).
Think about how people react to smoke alarms versus car alarms. When the smoke alarm goes off, people mostly follow the official procedure. When car alarms go off, people ignore them. Why? Car alarms have very poor specificity.
I'd add another layer of car alarms are Not My Problem, but that's just me and not part of Dan's excellent original talk.
Absolutely this. Our team is having more problems with this issue than anything else. However, there are two points which seem to contradict:
- Pages should be [...] actionable
- Symptoms should be monitored, not causes
Now granted, if you're only catching causes, there is the possibility you might miss something with your monitoring, and if you double up on your monitoring (that is, checking symptoms as well as causes), you could get noise. That said, most monitoring solutions (such as Nagios) include dependency chains, so you get alerted on the cause, and the symptom is silenced while the cause is in an error condition. And if you missed a cause, you still get the symptom alert and can fill your monitoring gaps from there.
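The suppression logic described above can be sketched in a few lines. This is a simplified model of dependency-chain alerting, not Nagios's actual implementation, and the service names are made up:

```python
# Map each symptom check to the cause checks it depends on (hypothetical services).
DEPENDENCIES = {
    "website_latency": ["database", "core_switch"],
    "checkout_errors": ["database"],
}

def alerts_to_page(failing_checks):
    """Return the checks worth paging on: every failing cause, plus any
    failing symptom whose declared causes are all healthy (a monitoring gap)."""
    pages = []
    for check in failing_checks:
        causes = DEPENDENCIES.get(check, [])
        if any(c in failing_checks for c in causes):
            continue  # symptom explained by a known failing cause -> silenced
        pages.append(check)
    return pages

# Database down: page on the cause only; dependent symptom alerts are silenced.
print(alerts_to_page({"database", "website_latency", "checkout_errors"}))
# Symptom firing while all its causes look healthy: page on it, then go
# fill the monitoring gap it revealed.
print(alerts_to_page({"website_latency"}))
```

The second case is exactly the "missed cause" scenario: the symptom alert still fires, and the post-incident work is to add the missing cause check.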
Leave your research for the RCA and following development to prevent future downtime. When stuff is down, a SA's job is to get it back up.
It changes the mindset from "Failure? Just log an error, restore some 'good'-ish state and move on to the next cool feature." towards "New cool feature? What possible failures will it cause? How about improving logging and monitoring on our existing code instead?"
Example: you get a server failure which affects a service, and you begin working on replacing that server with a backup; but a switch is also dropping packets, so you are getting alerts on degraded service (the symptom) while believing you are fixing its cause (the down server), when in fact you will still have a problem after the server is restored. So my challenge is figuring out how to alert on that additional input in a way that folks won't just say "oh yeah, this service, we're working on it already."
It's hard to tune them so that the signal-to-noise ratio stays high.
This is an excellent point that is missed in most monitoring setups I've seen. A classic example is some request that kills your service process. You get paged for that so you wrap the service in a supervisor like daemon. The immediate issue is fixed and, typically, any future causes of the service process dying are hidden unless someone happens to be looking at the logs one day.
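One way to keep those crashes visible is to make the supervisor itself feed the alerting system: count restarts in a sliding window and page when the rate is abnormal, so the cause isn't hidden just because the service "stays up". A rough sketch, with an arbitrary threshold and window:

```python
import time
from collections import deque

class RestartMonitor:
    """Track restart timestamps; flag when too many occur within a window."""

    def __init__(self, max_restarts=3, window_seconds=3600):
        self.max_restarts = max_restarts
        self.window = window_seconds
        self.restarts = deque()

    def record_restart(self, now=None):
        """Record one restart; return True when the rate warrants an alert."""
        now = time.time() if now is None else now
        self.restarts.append(now)
        # Drop restarts that have fallen out of the sliding window.
        while self.restarts and now - self.restarts[0] > self.window:
            self.restarts.popleft()
        return len(self.restarts) > self.max_restarts

monitor = RestartMonitor(max_restarts=3, window_seconds=3600)
# Four crashes in five minutes trips the alert on the fourth restart.
print([monitor.record_restart(now=t) for t in (0, 100, 200, 300)])
```

The same idea works at the log level (alert on the rate of "restarting service" lines) if you can't modify the supervisor.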
I would love to see smart ways to surface "this will be a problem soon" on alerting systems.
I can totally understand why SMBs have rotations. They have less staff. But a monster corporation? This seems like lame penny pinching. Heck, for the amount of effort they're clearly putting into automating these alerts, they could likely use the same wage-hours to just hire someone else for a shift. Heck, with an international company like Google they could have UK-based staff monitoring US-based sites overnight and vice versa. Keep everyone on 9-5 and still have 24-hour engineers at their desks.
If you are interested you can also get my point of view from my Velocity talk on Monitoring without alerts. https://www.youtube.com/watch?v=Gqqb8zEU66s. If you are interested also check out www.ruxit.com and let me know what you think of our approach.
-- Marcin, former Google SRE
In a previous position, we had a custom ticketing system that was designed to also be our monitoring dashboard. Alerts that were duplicates would become part of a thread, and each was either its own ticket or part of a parent ticket. Custom rules would highlight or reassign parts of the dashboard, so critical recurrent alerts were promoted higher than urgent recurrent alerts, and none would go away until they had been addressed and closed with a specific resolution log. The whole thing was designed so a single NOC engineer at 3am could close thousands of alerts per minute while logging the reason why, and keep them from recurring if it was a known issue. The NOC guys even created a realtime console version so they could use a keyboard to close tickets with predefined responses just seconds after they were opened.
The only paging we had was when our end-to-end tests showed downtime for a user, which were alerts generated by some paid service providers who test your site around the globe. We caught issues before they happened by having rigorous metric trending tools.
Call your bank - most of them have a trade finance desk, or Google the term. A slightly cheaper way of doing it is to have a lawyer draft a terms of trade or supply agreement with either the funds in escrow with the firm or a bank guarantee.
Having your bank finance the trade is worthwhile as interest rates are so low at the moment. Common terms are 60, 90 or 120 day delivery with FED/LIBOR + x% (where x is the risk profile of your business - shop around to get prices) cost over the term.
The agreement would contain delivery timetables, warranties, customs clearance, liabilities, indemnities, guarantees, who pays what, etc. and you are protected legally without handing over tens of thousands of dollars or more upfront. These contracts are protected by law in almost all jurisdictions.
You don't finalize the transaction and pay for it until all conditions have been met and you have the goods in hand and have verified. The seller is trusting an international bank or a law firm escrow, and they should also have a lawyer or bank on their end (a trusted seller can receive the cash upfront for a fee with the bank picking up the risk, although I doubt any bank would finance a bitcoin atm without verified trade volume and a lower risk profile).
Talk to your lawyer and talk to your bank, having expensive items delivered without risking the full cost is a solved problem as old as finance itself and trillions of dollars annually are traded in this way. The extra $1k or so you'd pay on a $25k deal are worth it for the guarantee. If you ship goods regularly the costs are amortized as you can use the same contracts and finance suppliers. Don't deal with anybody shipping $20k+ valued products that doesn't deal with a bank trade finance desk or law firm and is instead asking for a bank transfer.
This is how most high-cost goods are traded - from oil and other commodity supply contracts through to companies like Boeing and Airbus supplying planes, or GE supplying turbines for a power plant. Most people without experience go into it thinking that buying something that costs $25k, $100k, or millions of dollars is just the same as purchasing something at your local store, only with bigger numbers. It isn't.
Bitcoin multisig transactions, or m of n transactions are also worth investigating as they cut the costs - although there is currently a lack of escrow/intermediaries who are as trusted as major law firms or an international bank. There is a huge opportunity in cutting the costs and fees associated with trade finance with bitcoin.
Orders ship within 2-3 weeks. Shipping to Canada is no problem, customs is no problem. Assembly is in Portland, Oregon, so it's a quick ship to Vancouver. We've shipped hundreds of units.
Setup is completely on the owner's end. Owners don't need the company to configure the unit, and the company doesn't need to run servers for them. Skyhook ATMs use blockchain.com for the wallets (using accounts the owner creates independently), and the exchange price source is the owner's choice, all controlled through the interface. This isn't to avoid having a setup phase - it's to allow owners to have complete control over their money, because then it's a trustless system (the source code to the ATMs is open and auditable: https://github.com/projectskyhook).
I'm astounded at the prices on some of these ATMs. For $25,000 they could have ordered 24 Skyhooks (a little padding for shipping, which I believe can be palleted to save money). Bitcoin ATMs shouldn't cost the down payment on a house, IMHO.
Skyhook is also completely bootstrapped by the founders (no VCs) and a result of that (and careful burn rate management) is now profitable, despite the low price point. The cofounders have been using some of the money (and their free time) to help clean up and build out a new hacker/makerspace in Portland that's going to be awesome. Easily the best group of people I've ever had the privilege to work with.
I wish more people approached startups this way. There's a real pride to making companies on your terms that you control, and succeeding at it. Startups shouldn't be about overvaluing your company and having Kid Rock do a rock concert in your back yard.
There's a working Robocoin ATM at Hacker Dojo in Mountain View, CA. Nobody uses it. Nor should they. 15% bid/ask spread, 5% fee.
Incidentally, the retail price for an ordinary ATM is $2,000 to $4,500, depending on the features ordered. A standard through-the-wall bank ATM is about $9,500. Even if you order every option from cash deposit through biometrics, they don't cost $25,000.
Judging from the replies, it seems to have backfired, probably because he did not acknowledge the errors on his side (shipping a lemon, being unresponsive over email, and not having provided a refund yet).
Worse, he doesn't seem to realise that the name-and-shame was a result of his and ultimately RoboCoin's actions.
Edit: The correct response would have been to issue a refund immediately, acknowledge and apologise for the issue, highlight the trouble area (upstream supplier issues or whatever lead to this), and commit to solving or already having solved this problem for future clients + internal review of this mishap.
I've heard this from smaller retailers of all kinds of products and, as pointed out in Andrew's reply, it's complete bullshit. The company that took your money is the company you formed a contract with. Any nonsense about "yeah, but the manufacturer..." is a smokescreen designed to dodge responsibility.
(Wimpy disclaimer: this isn't legal advice and maybe it's not true if you live in Yemen or something, I dunno)
I have the feeling that Jordan knows exactly what he's doing. The amount of $20,000 is a lot but it's not that much once you start to hire lawyers and file a suit or go into arbitration. Sounds like he's stringing them along waiting to see how much more cash they're willing to invest to recoup that $20,000.
Even worse Jordan might be setting himself up for a settlement of less than $20,000 where he keeps some of the money but knows it's a better offer for Andrew and Rajiv than hiring lawyers.
Been there. Done that. I'll never look at a purchase or legal document the same again. Jordan's got the upper hand here.
Really, this story does totally suck for you guys but there should be absolutely zero trust or respect for anyone selling anything related to bitcoin at the moment. You're just begging to be ripped off.
Out of interest, what was the expected ROI? I mean, you sunk $25K into it. Even if you were charging 5% fees, you would need half a million dollars in transactions just to break even, and that isn't including the monthly rental fee for the location.
Is there really that big a demand for BTC in a pub?
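The break-even arithmetic behind that estimate is simple division: the $25k and 5% figures come from the comment, while the rent figure below is invented for illustration.

```python
def breakeven_volume(hardware_cost, fee_rate, monthly_rent=0.0, months=0):
    """Transaction volume at which collected fees cover hardware plus rent to date."""
    return (hardware_cost + monthly_rent * months) / fee_rate

# $25k machine at a 5% fee: roughly half a million dollars must flow through it.
print(breakeven_volume(25_000, 0.05))
# With a hypothetical $500/month location fee over two years: ~$740k.
print(breakeven_volume(25_000, 0.05, monthly_rent=500, months=24))
```

And that ignores the bid/ask spread, cash handling, and compliance costs, all of which push the real break-even point higher.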
> We're prepared to take legal action, but we figured we'd give Jordan a taste of internet justice first.
IANAL but, I don't think you're helping your case by seeking internet justice.
If I purchase this with my credit card and don't get the product before the price goes down, I have the option to cancel the order or get the lower price. If the product gets stuck in RMA or support is not as promised, I can chargeback.
The 'advantage' of bitcoin is that now I have to deal directly with the vendor, who is not my advocate.
It's the repeated false promises and claims he makes in his e-mails that create this impression, for example the quick refund. A serious businessman only promises this if he can make absolutely sure that the customer will get it (and it doesn't matter that he wrote "we hope to ...", it's a clear message). I've also read Jordan's reddit posts, sounds fishy as hell.
That being said, it doesn't really surprise me that people with that poor of a user experience have poor customer service.
What is it about the bitcoin market that seems to attract less-than-ideal business practices?
It's not going to be government regulations that kill Bitcoin, it's the associations with illicit drugs, child pornography, and the dishonesty of vendors like Robocoin.
"Hey [omitted], http://www.reddit.com/r/Bitcoin/comments/2jakg4/the_great_ro...
It was really poorly handled. I take full accountability." -Jordan
But let this be a reminder that you should never pay the full amount up front.
I hope you get that smarmy Jordan Kelley into court.
I have no idea why one would buy a Robocoin machine when there are a number of evidently superior alternatives.
That guy Jordan deserves to be brought to justice.
2. Expecting to pay $25k and pressing a button and then making $2k per month for eternity is NOT wise.
The problem with fusion research like this is that the closer you get to self-sustainment or net energy generation, the harder it gets and the more problems pile up. This project looks like many other similar projects that have gone bust: they start by solving the easiest problems first, get some funding, and hit the wall.
The main problem with any reactor design is how to handle the 14 MeV neutrons produced by the fusion reaction (no mention of this in the article). They tend to damage the reactor and make it economically unfeasible. At this point, being able to create a fusion reaction is not the main challenge; it's sustainment and the economics of limiting the damage. If they really have solved all the problems and demonstrate economically sound fusion in 5-10 years, they will surely be handed the Nobel Prize in Physics.
*sigh* To the extent this is true, I suspect those "large fusion reactors" are tuned not so much for generating electricity as for annihilating whatever the carrying missile is pointed at.
But never mind fuzzy thinking at Reuters right now. This is amazing news if it holds up. Fingers crossed.
This is not the first time they have gone public with this - Charles Chase gave a talk at Google X last year, recorded and publicly-available:
A typical thermal power station has an efficiency below 50% for electricity generation, so the plant dissipates at least as much heat as it generates electrical power.
I wonder how you could get rid of 100MW of waste heat from a volume small enough to fit on a truck. That's a heat flux of more than a megawatt per square meter of surface area.
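Rough numbers behind that estimate, assuming a standard 40 ft shipping container and a 40% thermal-to-electric efficiency (both are assumptions; the article only gives the 100 MW figure):

```python
# Standard 40 ft shipping container, dimensions in meters (assumed).
L, W, H = 12.2, 2.44, 2.59
surface_area = 2 * (L * W + L * H + W * H)  # ~135 m^2 of total surface

electrical_output = 100e6   # 100 MW, per the article
efficiency = 0.40           # assumed thermal-to-electric conversion efficiency
waste_heat = electrical_output * (1 - efficiency) / efficiency  # 150 MW rejected

flux = waste_heat / surface_area  # W per m^2 of container surface
print(round(surface_area, 1), round(flux / 1e6, 2))  # ~135.4 m^2, ~1.11 MW/m^2
```

So even charitably assuming heat could be rejected over the entire container skin, the flux is on the order of a megawatt per square meter, which is why a compact reactor would need external cooling loops rather than fitting everything in the box.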
That's the standard issue joke. "Nuclear fusion has been just ten years away for the last fifty years"
It is so common as a joke I'm surprised the article didn't mention it.
It is however another great example that there is money going into lots of different fusion ideas. And that can only be a good thing as far as I am concerned.
I don't think so.
Never mind, another linked article says that the injectors are only used for ignition.
So that would power 50k to 100k typical houses in the US... not bad!
I didn't find much to inspire confidence.
No doubt that Skunkworks is world class... but claiming a "breakthrough in fusion energy" before a prototype has even been built is pretty bold of them.
I'm sure I've heard that somewhere before
Both have publication date 10/09/2014. Maybe connected with the press release?
These reactors will be similar to the Nimitz-class aircraft carriers and potentially only require refueling every 25 years through a process known as ROH.
100MW capacity in the size of an international shipping container? The implications of this are massive if this technology can be brought to scale, and that is the key term - SCALE.
The cost of solar is plummeting and by the time fusion technology can produce 10% of our energy demand the cost of solar will be heading to $1/Watt, battery storage will be competitive and that is hard to beat even if the footprint is only a fraction of a solar farm.
 http://aviationweek.com/technology/skunk-works-reveals-compa... https://en.wikipedia.org/wiki/Nimitz-class_aircraft_carrier#...
> Lockheed shares fell 0.6 percent to $175.02 amid a broad market selloff.
Fusion Power GeekOut: http://www.dotnetrocks.com/default.aspx?showNum=1013
Fusion Power GeekOut #2: http://www.dotnetrocks.com/default.aspx?showNum=1022
Cold Fusion GeekOut: http://www.dotnetrocks.com/default.aspx?showNum=1037
Shouldn't the market be a little more excited about this?
Many companies have tried to set up a skunkworks since, but didn't have the guts to run it like Kelly did, and didn't get the results, either.
(And then it rarely happens)
The team acknowledges that the project is in its earliest stages, and many key challenges remain before a viable prototype can be built
This falls under the xkcd 10 year plan:
"we haven't finished inventing it yet, but when we do, it'll be awesome"
Ah science reporting.
"Partners in industry and goverment". Translation: 10 years to product (but hey, prototype in 5) so please give us lots of dough. It's the technology of the future and always has been.
I'm guessing this means that they will try and maximize profit rather than maximize cheap and clean energy for the entire world. That is a bit disappointing if true.
I find that to be an interesting idea, since the average will only increase over time.
EDIT: And here's Anandtech's article on Project Denver http://www.anandtech.com/show/7622/nvidia-tegra-k1/2
The size thing is what I'm interested in though. I'm waiting to see if there actually is a 12" iPad Pro tomorrow; otherwise I'll pull the trigger on a Note Pro. I get the pocket/purse argument for 9", but my iPad has become pretty much a replacement book. Between 1dollarscan, O'Reilly's DRM-free ebooks, my PDF library of papers and data sheets, and various magazines, nearly all my space is being consumed by reading material. So for me (weird case, I know) it is my library in my hand, and I really would like it to be a 12-13" screen.
That said, the Nook HD+ is my 'budget' 9" Android tablet that is my 'look up things' / 'play music' / 'cast netflix' device and this could easily replace that.
The thing I loved the most though was the keyboard cover. It's narrow but you get used to it very fast. Typing on it was such a pleasure! I was not allowed to travel with this device, but I could easily see myself just taking the N9 instead of a laptop for short trips.
> Google's Nexus 9 goes up for pre-order on October 17th, and should hit the shelves on November 3rd. The 16GB model will go for $400, the 32GB for $480, and a 32GB model with LTE built in will set you back $600.
So finally having a good 4:3 Android tablet is good news for readers :)
That said, I hope this sells well. HTC badly needs some success.
"Memory: 16 GB & 32 GB." Google, of course, means storage.
I'm kind of curious what the secret sauce is that makes Google think they are ready to sell a premium-priced tablet. Lollipop may be better than iOS 8 but is that enough to overcome the app gap?
(Anecdata: my wife had her ipad mini stolen last week. We have 5 Android devices in the house, but as long as Civilization is only available on iOS this isn't a serious contender for a replacement.)
I enjoy the Nexus 10, but I really feel like I've kind of just been left in the cold. Android updates haven't been kind to the device in my experience.
Is battery life worth it?
I'm writing the arm64 Go compiler. I wonder if it's feasible to start testing on this machine too (e.g. how open it really is, how hard is to get a sane working environment, etc).
Really, I could carry that in one hand? That's amazing! Know what else I can carry in one hand? My fucking 17 inch MacBook.
We have 3 apps over 32 servers and 5 environments, and operationally it's like pulling teeth. This has the chance to change everything!
"Microsoft Corp. and Docker Inc., the company behind the fast-growing Docker open platform for distributed applications, on Wednesday announced a strategic partnership to provide Docker with support for new container technologies that will be delivered in a future release of Windows Server."
I strongly suspected that Windows Server vNext would have some sort of 'container' support after the wild success of Docker.
Congratulations to MS though, I think this is a good initiative. Not sure why you even need a partnership with docker TBH as they didn't create the underlying OS technologies that make containerization possible on Linux.. But, with the acquisition of Mojang it is apparent MS is placing a lot of emphasis on acquiring mind share.
Second, a native Darwin Docker server would be killer.
The mobile implications could be huge if this made it out of Windows Server.
Not sure this is a good idea for Docker though. Doesn't it mean they lose their focus on Linux? Seems to complicate things a lot, and potentially introduce conflicts when prioritising what to do next. Simple is good. Serving a closed source operating system looks unwise, given the nature of their business.
I wonder if Steven Sinofsky would have been game for this?
A lot of Windows developers use commercial Windows for development (i.e. Win7); with these features, I anticipate more developers using (the more expensive) Windows Server.
If your build output could be a container (VS build process?) that you can ship, or, as in a lot of "enterprise" organisations, pass on to QA / UAT, then this is a big deal and a massive step forward.
I do realize corpo-world is filled with Windows Servers now but it seemed Linux/Docker could change this with containers as a 'standardized server app format' with super easy provisioning process.
Now since Windows will get more or less the same - Linux/Docker and Windows/Docker will compete on tools and raw perf.
In other words, will there now be Windows images and Linux images, and Windows images will run on Windows hosts and Linux images on Linux hosts?
Good luck to Docker.
Could you be more specific, what "features" precisely? Windows NT already takes a lot of concepts from traditional UNIX kernels and builds on them (unlike, for example, Windows 9x).
> From what I remember, Windows Server is already a step in that direction, but Microsoft hasn't advertised much of that functionality so far, maybe in order to maintain customer lock in.
Could you be more specific, I know a lot about modern Linux and Windows Server, and that comment is mysterious to me. Are you talking about the deprecated UNIX Services for Windows which has existed for well over fifteen years?
Is it something like an integration of Mesos, Fig and Docker?
Otherwise existing Dockerfiles will not build on Windows - right?
Docker's people will spend more time fixing bugs (and introducing new ones) because of Microsoft, the documentation will become a mess (it can't possibly be the same documentation for two such different systems), and the Linux side of Docker will be more mediocre compared to what it could have been if all effort had gone into it.
Remember Internet Explorer 6, Visual Basic, the horror that is Excel and the whole Office suite, ASP.NET, Windows Millennium, and Microsoft's attempt to kill Linux through SCO.
I actually installed a bunch of other iOS Reddit apps to see if I could find one that supported sidebars better, so that I could officially recommend that app to my subreddit instead. The other ones didn't support sidebars at all though!
What I want most is for Alien Blue's tiny arrow buttons that lead to subreddit sidebars - http://i.imgur.com/ygWOV91.png - to be detail disclosure buttons instead, with a tiny "i" (https://developer.apple.com/library/ios/documentation/userex...). I believe that icon makes more sense and will help readers realize they should tap there for sidebar information.
I was under the impression that Apple allowed transferring apps between accounts without wiping out the history.
...and here's Jase's (the developer's) announcement on /r/alienblue...
Android version in the works?
What about Windows phone?
iPad version? Do I pay for it?
How to transfer settings to new app.
I think it'll just become Reddit for iOS. The default, official client. The one 90% use.
After an acquisition like that, I ask myself:
Does Reddit have smart developers to build a mobile app like that?
Why do they need to do an acquisition to get a good app?
The questions seem trivial; Reddit has great developers. But looking at recent acquisitions like Apple's (something rarely done before), I think that in the near future companies will hire not single people, but whole startups.
Thinking back 8 or 10 years, when "social networks" and online communities were becoming something obviously substantial, there were a lot of excited ideas about what they would become. Ning's idea of basically extending forums and creating lots of online communities seemed attractive. When they're first taking off, online communities are exciting. But they seem to age. It's almost like being in the same conversation forever.
What can Reddit still accomplish?
What's the future for that?
Necessary move by reddit. If someone was going to be able to compete with reddit, and succeed, Alien Blue had a very good shot.
Cool to see reddit making moves since their new round of funding, too.
Semi-related segue: Since this $50mm funding round happened, I've recommended to a couple people that they apply for jobs at reddit. 
Reddit has maintained impressive growth since 2005. I'm expecting them to be doing very interesting things over the next 5-6+ years.
Presumably there are more popular unofficial reddit apps out there when you start counting Android.
Every time I approach Reddit, I get a feeling of being overwhelmed by the interface's complexity, to the point that part of me wants to know more and finally get to use it, and another part simply feels frustrated. The latter has always won so far, even after forcing me to create an account.
It just doesn't click with me. And I don't want to read some freaking 101 guide, because a well-designed interface needs no guide.
Anyone else feel like that?
It's the absolute worst place to post anything you want to say. I don't know why people keep using it.
Reddit also likes to spam top youtube videos with solicitation attempts to get people to go to reddit.
Only about 6 months ago I bought Alien Blue + Pro, now I have to do it again.
It appears that if I get it this week I will probably be able to save $4, so that's good, but I feel a bit scammed by the iAlien developer, to the point that I don't trust some of these apps anymore. I do trust that reddit is a good thing, so I'm probably set from here on out.
And, in the end, I know it's only $8, but I hate being scammed, it's the principle.
Alien Blue is a good app, so I'm happy for Jase the developer, and I now forget why I originally chose iAlien over Alien Blue. Hopefully we get some good advancements in the mobile interface, which is my primary means of accessing reddit.
EDIT: Adding links to controversy here as well:
Jesus, one trillion passphrase checks a second.
Well I know what I'm changing this afternoon.
> My personal desire is that you paint the target directly on my back. No one, not even my most trusted confidant, is aware of my intentions and it would not be fair for them to fall under suspicion for my actions.
Snowden has always had my respect but the more I read the more he has my admiration as a person.
(and of course Manning who will be left to rot for the next 35 years by each president)
"Hypervisor (Hypervisor.framework). The Hypervisor framework allows virtualization vendors to build virtualization solutions on top of OS X without needing to deploy third-party kernel extensions (KEXTs). Included is a lightweight hypervisor that enables virtualization of the host CPUs."
Any news on if anyone is actually using this yet? Stability matters a lot more to me than raw performance in VMs, so I'd be very keen to know if Parallels/Fusion/VirtualBox have adopted this--assuming that it would actually improve stability or, if not, what the pros/cons are for using Apple's own Hypervisor over a third party's.
I know these reviewers cannot answer this question, I just want to point out that it's the only relevant question for me. Given Apple's track record, this release will most likely cause my MBP to crash more often, but I want data on that. I want a review to actually explore this angle as opposed to simply talking about features that honestly mean nothing to me.
Apple's mobile OSs have a way of obsoleting older hardware. I'm curious to know if their desktop OSs are trending that way as well, or if they're making performance gains instead.
Learned a cool trick tonight: Yosemite was taking a while to install, so I did some googling and learned you can see the installer's log by typing CMD-L during the install process.
Near the bottom of page 3, just thought it was funny considering today's 5K iMac announcement.
Anyway, it did help me know what to expect in Yosemite so thanks John.
I also discovered the "purple" full screen button from yesteryear - I much prefer that to the fullscreen arrows in Mavericks, and dislike the new default "FULLSCREEN" behaviour of the zoom button. Fullscreen makes the menubar and all that sits in it (MenuMeters, clock etc) useless. On a laptop, the indicator about the battery is kind of important to me, and I don't find the clock distracting or require it to be removed in order to help me read text on other parts of the screen. I think it is a foolish move.
What is funny about Yosemite is that many dialog boxes remind me of KDE.
They've done two things wrong with Spotlight. By moving it down from the top of the screen, that immediately reduces the number of results that can be seen. Then if that wasn't enough they further limit the quantity of visible results by not allowing results to flow to the bottom of the screen. A double whammy if you will.
I can live with a slightly slower experience (yes, my indexing is done) but reducing the result count for absolutely no good reason is unacceptable.
And yes, I know I can scroll down.
Edit: Oh and while I'm complaining, please tell me which one is selected: http://i.imgur.com/Szj3Yag.png
Now no dice... anyone know a way to keep the screen off with the lid open?
To execute in Terminal:
sudo nvram boot-args="iog=0x0"
To undo in Terminal:
sudo nvram -d boot-args
Once you type it into Terminal, I believe you need to enter your password; I then restart my machine. Now the TRICK is to either restart your machine with the lid already closed (hit restart, then slam the lid!) OR turn the machine on for the first time (then quickly slam the lid!). Once you are past the login screen you can open the lid.
I really don't need the grays/white/blacks of past operating systems. The initial setting for my quick bar just looks horrid, little icons on a dark gray background.
Everything looks so 16-bit. I understand it bleeds through the background color; I would just prefer to have no background on the dock and have the icons float.
I was quite impressed by 10.10 from the Keynote a few weeks ago, and I'm looking forward to experiencing some of that. No iPhone so can't enjoy that level of integration, but perhaps my iPad will be happier.
Meanwhile I have a Nexus 5 on order, and I'm debugging problems with my Linux PC's new motherboard. Certainly Linux on a roll-your-own hardware platform is a different world from the slick, smooth Apple experience. I like both for what they can do but the Apple is becoming my go-to front end while the Linux machine is becoming more of a server and back-end tool.
The Dock is 2D until you roll over it; then icons pop out of it and it looks like it is 3D.
This is it, Apple is the new Microsoft. Frankly, and I can't believe I would ever say something like this, but Windows 7 now looks better and more consistent.
POODLE seems to be a padding oracle based on SSL 3.0's inability to fully validate padding. The oracle only gives you the last byte of a block; a full extended padding oracle gives you successive bytes, but this vulnerability doesn't. The authors sidestep that problem by using application-layer control of the boundaries of blocks to repeatedly line up the byte they want to learn with the last byte of padding the vulnerability reveals. C l e v e r !
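For anyone who wants to see that substitution trick in code: below is a toy simulation of the SSLv3 flaw, my own sketch rather than anything from the paper. It stands in XTEA (8-byte blocks) for the record cipher and HMAC-SHA256 for the record MAC; the attacker copies the ciphertext block holding a cookie over the final all-padding block and waits for the 1-in-256 accept, which leaks the cookie's last byte.

```python
import hmac, hashlib, os, struct

BLOCK = 8  # toy block size; real SSLv3 ciphers use 8 or 16

def xtea(key, block, decrypt=False):
    # Standard 32-round XTEA, used here only as a convenient stdlib-free block cipher.
    v0, v1 = struct.unpack(">2I", block)
    k = struct.unpack(">4I", key)
    d = 0x9E3779B9
    if not decrypt:
        s = 0
        for _ in range(32):
            v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + k[s & 3]))) & 0xFFFFFFFF
            s = (s + d) & 0xFFFFFFFF
            v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + k[(s >> 11) & 3]))) & 0xFFFFFFFF
    else:
        s = (d * 32) & 0xFFFFFFFF
        for _ in range(32):
            v1 = (v1 - ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + k[(s >> 11) & 3]))) & 0xFFFFFFFF
            s = (s - d) & 0xFFFFFFFF
            v0 = (v0 - ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + k[s & 3]))) & 0xFFFFFFFF
    return struct.pack(">2I", v0, v1)

def cbc_encrypt(key, iv, pt):
    # SSLv3-style padding: the last byte says how many *other* padding bytes
    # precede it, and those bytes are never checked -- that is the whole bug.
    n = BLOCK - len(pt) % BLOCK
    pt += os.urandom(n - 1) + bytes([n - 1])
    out, prev = b"", iv
    for i in range(0, len(pt), BLOCK):
        prev = xtea(key, bytes(a ^ b for a, b in zip(pt[i:i+BLOCK], prev)))
        out += prev
    return out

def server_accepts(key, mac_key, iv, ct):
    # The SSLv3 receiver: CBC-decrypt, trust the pad-length byte, check the MAC.
    pt, prev = b"", iv
    for i in range(0, len(ct), BLOCK):
        blk = ct[i:i+BLOCK]
        pt += bytes(a ^ b for a, b in zip(xtea(key, blk, decrypt=True), prev))
        prev = blk
    pad = pt[-1]
    if pad > BLOCK - 1:
        return False
    pt = pt[:len(pt) - pad - 1]
    if len(pt) < 32:
        return False
    body, tag = pt[:-32], pt[-32:]
    return hmac.compare_digest(tag, hmac.new(mac_key, body, hashlib.sha256).digest())

def recover_last_cookie_byte(key, mac_key, cookie):
    # Attacker-chosen path length lines the cookie up on its own block and
    # makes the record end in a full block of padding.
    assert len(cookie) == BLOCK
    for _ in range(40000):                    # expected ~256 tries
        body = b"GET /a  " + cookie           # block 0: request, block 1: cookie
        rec = body + hmac.new(mac_key, body, hashlib.sha256).digest()
        iv = os.urandom(BLOCK)
        ct = cbc_encrypt(key, iv, rec)
        blocks = [ct[i:i+BLOCK] for i in range(0, len(ct), BLOCK)]
        forged = b"".join(blocks[:-1]) + blocks[1]   # cookie block over padding block
        if server_accepts(key, mac_key, iv, forged):
            # Accepted => last byte of D(blocks[1]) XOR blocks[-2] == BLOCK-1,
            # so the cookie's last byte falls out of the CBC chaining:
            return (BLOCK - 1) ^ blocks[-2][-1] ^ blocks[0][-1]
    return None
```

Each attempt uses a fresh IV (a fresh record), so on average ~256 forged records leak one byte, matching the figures in the paper's analysis.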
This attack, however, seems to.
We saw this with Heartbleed too: given sure confidence that there is a vulnerability in a particular diff, skilled security researchers can find it very quickly. It makes me want to find such researchers and firmly tell them that there are vulnerabilities in TLS 1.2.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:EECDH+RC4:RSA+RC4:!MD5;
ssl_prefer_server_ciphers on;
SSLProtocol all -SSLv2 -SSLv3
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
(Except please note for the purposes of this question I'm assuming as a given that cutting off SSLv3 is considered preferable by the entity in question to a very weak SSL negotiation. Whether or not any given entity should have that opinion is a different question; I politely ask that you get into that question elsewhere.)
I'll continue to grow the list the more I see/read.
"If you are encountering trouble with inbound Twilio requests while mitigating the SSLv3 vuln, contact firstname.lastname@example.org for direct help."
(That is, they have to manually enable TLS on your account.)
Also, if you're using GET requests with ExactTarget, you'll run into the same thing, but I haven't heard back from them if / when they'll have that fixed.
RC4 is mentioned in passing as having weaknesses, but is it actually broken? If we can't disable SSL3 completely would using only RC4 ciphers be an option?
This suggests to me that a possible workaround could be to detect this attack because it will generate the characteristic pattern of a successful record amongst many invalid ones, and then expire the relevant cookies; by the time the attacker has figured out a byte or two, the cookie has already become useless. It could potentially turn into a denial-of-service, but that's something anyone with MITM capability can do trivially anyway.
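That detection idea is easy to sketch. Below is my own toy illustration (not anything a real TLS stack ships): count decryption failures per client and expire the session cookie once a burst appears, since POODLE needs on the order of 256 failed records per recovered byte.

```python
from collections import defaultdict

# Threshold is far below the ~256 failures one byte costs the attacker,
# so the cookie dies long before anything useful is recovered.
FAILURE_THRESHOLD = 32

class PoodleTripwire:
    def __init__(self, expire_session):
        self.failures = defaultdict(int)
        self.expire_session = expire_session   # callback: invalidate cookie server-side

    def record_decrypt_failure(self, client_id):
        self.failures[client_id] += 1
        if self.failures[client_id] >= FAILURE_THRESHOLD:
            self.expire_session(client_id)
            self.failures[client_id] = 0

    def record_success(self, client_id):
        self.failures.pop(client_id, None)     # legitimate noise resets the counter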
I've seen people post figures like 0.85% of HTTPS connections have been SSL 3.0 and was wondering how those figures were compiled.
As a general rule, review your logs before disabling things. And ask your users to use modern browsers as soon as possible.
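For nginx, "review your logs" can be made concrete if you log the negotiated protocol (this assumes you have added $ssl_protocol to your log_format, which not every setup does); a quick script then tells you what share of connections would break:

```python
import re
from collections import Counter

# Assumes a log_format that includes "$ssl_protocol" somewhere on each line.
PROTO_RE = re.compile(r'\b(SSLv3|TLSv1(?:\.[12])?)\b')

def protocol_share(lines):
    """Return {protocol: fraction of matched lines} for an iterable of log lines."""
    counts = Counter()
    for line in lines:
        m = PROTO_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    total = sum(counts.values())
    return {proto: n / total for proto, n in counts.items()} if total else {}
```

Feed it your access log and you get your own version of the 0.85% figure before flipping the switch.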
An old writeup of mine on TLS downgrade, if anyone's interested.
Already using the beta, it is very stable.
You can also set security.tls.version.min to 1
In Chrome set the command line flag --ssl-version-min=tls1
The real question is why it took major site ops this long to realize. Given a trove of handshakes (which Google has been saving for years), user-agent headers, and expected ciphersuites, it perhaps should not have been too difficult to detect downgrade attacks in the wild. That doesn't in itself give you POODLE, but it probably offers some clues... especially given other information available to them.
I guess I'm just not the average user, but the Chromecast handily beats what this (and other options in this genre, i.e., Apple TV / Fire TV / Roku) have to offer. I don't want another remote control. And making my phone the best remote control is an awesome solution. And the Chromecast doesn't take up any room in my living room / entertainment center. And it's only thirty-five-goddamn-dollars.
I guess the downside is that I can't play crappy games on my big screen. Darn.
I don't care what people say about wifi having gotten "better". By every single measurable criterion, it is slower, less reliable, and has lower capacity and higher latency than wired gigabit ethernet, and I doubt that will change anytime soon.
I demand wired ethernet on my devices, and I know a bunch of others who do too.
My 4-year-old Google TV box (Logitech Revue) -- with Android 3.1 -- is really showing its age. The app store is pretty empty and it only supports things like Amazon Video and HBO Go because Android still had Flash back then.
The missing key is an HDMI input. My TV is always tuned to the Google TV input whether I'm watching live cable TV, a Netflix movie or casting a YouTube video. I have a single remote control (the Google TV one) for all of them. It changes channels and settings on my cable box with HDMI CEC.
All these new boxes make you switch inputs and remotes all the time. I have too many remotes already.
The whole UI feels very snappy, and videos load very quickly.
The game pad feels great in my hands. No complaints there.
The selection of games on Google Play isn't huge (yet?). I currently see 16 games listed on it for download/purchase. My favorite so far is Leo's Fortune. I enjoyed it on my Nexus 5 when it came out, but after playing it on Android TV, I won't even play it on the phone anymore because I've experienced how much better the game is with a controller. I suspect that's going to be the case with a lot of games that come out for Android in the future. Touch interface only games have a lot of limitations.
Besides Netflix, you can also use Plex (PlexPass subscribers only right now) and that works pretty flawlessly as well.
I've been very happy with the whole setup and I'll be recommending it to all who are in the market for a set top box.
OUYA was almost dead. Now it's done.
Pretty much every mobile game I have ever tried has been a cesspool of micropayment dark patterns, or else something that really just serves to kill time when there is nothing else to do (riding the bus, waiting at an office, etc.)
Is there really a big market for this thing?
I'm somewhat surprised there is no ethernet in this player. For some places, wifi just doesn't work in their environment.
I've tried nearly every android TV stick, and while they are pretty close to what we need, and infinitely customizable, we had trouble getting consistent hardware. It seemed like each batch would behave slightly differently.
Is this just a case where Amazon got a reference implementation out the door before Google did?
I hope this doesn't mean they're backtracking on Chromecast.
I'll have to wait and see how the performance of Google Cast is. I'd like it to at least be as good as my ability to AirPlay HD MKV files to an Apple TV.
What? Are they exclusively targeting movie content with this? It doesn't make much sense to rent a movie if store apps don't have a feature I am looking for.
I just don't get it. Does every company have to jump in head first to any emerging market just because the others are doing it?
The Android TV interface looks pretty sleek as well.
edit: ha... probably market limited.
Roku is dead, long live Roku.
Spotify and iPlayer are my favourites. Notably absent.
It's pretty sweet and if it can set a standard for phones then we'll see carriers become true utilities between which customers can switch easily if they get poor service. In short, pretty awesome.
Today's carrier system kind of feels like this dinosaur, like a landline... archaic and unnecessary. With so many people travelling, changing places, changing technology etc, it makes a lot of sense to move from subscriptions to short-time payments, and eventually, pay-per second of use on the fly, directly, without an account or monthly statement, with a push payment instead of a pull payment. (digital currencies being a key element here). Anyway, getting a bit too off-topic here, but cool first move by Apple for sure!
The linked article suggests it's a method to save space in the hardware design so the SIM is not user serviceable. However, it also forces you to only buy data service from Apple approved carriers. Notice in the screen shot from the OP's article that Cricket or any of the more affordable MVNO's are not available as options.
This change is just as much about control over where you spend your carrier dollars (and, possibly, Apple getting a kickback) as it is about saving space.
If you want a new SIM card, and you don't have a contract, just buy a new SIM card and put it in your phone / tablet.
If you have a long-term contract, this won't help you anyway.
Where is the catch? (Sorry if I am sounding stupid)
I don't need to ask my provider or hardware OEM for permission. I have control. In the Apple scenario I'm giving this up. I vaguely remember some wrangling with GSMA over this.
Ultimately, as with many Apple products, we'll be trading control for convenience.
As others have pointed out, multi-IMSI SIM cards are nothing new, though points to Apple for getting the network carriers on board and sharing keys.
Getting a replacement SIM card is not a problem (certainly not in the UK). Most of the networks will happily post you out one for free and you can buy them for virtually no money in all phone shops, most supermarkets, market stalls... everywhere -even in this tiny backwater technophobic village where I work.
My worry is that this is the start of a path toward devices having embedded SIM cards that are not user replaceable, or even have no SIM at all and just use the secure storage module built into the chipset. This seems like a bad path to be heading down, as it would directly take choice and power away from the end user.
I created a little project that allows remote usage of multiple SIM cards as a Software-SIM on MTK-based Android phones. It forwards the commands via TCP from a modified baseband firmware. This means you could, e.g., have multiple SIM cards for your business trip without changing the card in your phone. Also, malicious people could steal your SIM authentication if you use a vulnerable Android phone and use it:
So I doubt you will be able to activate those hardcoded SIMs with any plan that easily; I doubt you will even be able to activate one without Apple's help.
But of course, I'm just guessing. I have no idea if that is the case.
AT&T is by no means perfect but given all the traveling I've done and my experiences with Verizon and T-Mobile (never tried Sprint) I've found that AT&T and Verizon are interchangeable and T-Mobile is not quite on the same level.
I know my rate plan hasn't become more expensive when I upgrade so how can I expect that it will become cheaper if I bring my own device?
It makes mobile devices really "mobile". Customers do not have to physically enter a local carrier store to add a new line/data plan or transfer to another carrier. It might save a great amount of time and effort, especially when traveling overseas.
I hope it comes to the iPhone in the near future. Given the ease of switching carriers, I hope it helps bring down prices.
It baffles me.
I have an iPhone 5 with Sprint, and I thought I was more or less locked in because they used CDMA. I thought I would need a new phone if I wanted to switch to anything else.
Would love to know I've been wrong and can switch without buying out my contract and buying a phone...
Edit: I meant it as a future possibility
jfc, I can't believe this is just now being fixed. This has to be the most infuriatingly stupid thing about Android. I don't have hopes of them adding the ability to intelligently switch from a weak wifi signal to a strong cell signal, but this is a step in the right direction. No more assuming I have no new emails/texts when I'm in an airport because my phone quietly joined a wifi network and is waiting for me to open a browser and log in.
As a developer not initiated to the Android platform, the second half of this sentence is a very scary thing to read.
I have read reports from people who have tried the developer preview, however their anecdotes vary so wildly (e.g. 10-60% improvements) it is hard to believe any of them. Need something more scientific than people's vague "I got more hours today than yesterday."
My guess is it will require a re-flash back to KitKat so that Lollipop can be auto-upgraded over the air. In which case I might as well get that going now...
Catching up to Apple but I'd love to see this across laptops as an app or through Chrome.
Nexus Player is very interesting though - but again it all depends on the price which they haven't mentioned.
As people below have pointed out they changed the page to include the Nexus 4 after my post.
I don't understand why people keep saying that it won't come on Nexus 4.
1. Needs a very long-lived session to be convenient. Elsewhere they note theirs is a whole year. That's a long time to go without reauthenticating a client!
2. Authentication is, or should be, a much more common event than recovering a lost password, and now that's totally dependent on your email provider. One concern is latency: a minute can feel like an hour while waiting to log in to your account to do something urgent. But provider downtime, spam filters, etc., are also worrisome; all of them can block you from accessing your accounts.
Of course, the way they "deal" with #2 is by just trying to avoid authenticating you very often (#1), which is not a generally applicable security practice. It might be OK in some cases, but I wouldn't classify that as an overall "better" way to sign in.
I think a better way to solve this is at the browser/OS level with built-in password generation and management. And that's actually a third drawback to this approach.. it's incompatible with password managers.
One could bridge that gap by adding two headers to the authentication emails - one containing the URL where the sign in request originated, and one with the sign in URL that must be visited.
A browser extension could then check your emails, and if an incoming mail matches the sign-in page of the current tab, log you in directly.
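The sending side of that two-header scheme needs nothing beyond the standard library. The header names below (X-Signin-Origin, X-Signin-URL) are invented for illustration; any pair the site and the extension agree on would work.

```python
from email.message import EmailMessage

def build_signin_email(to_addr, origin_url, signin_url):
    # The two custom headers proposed above; names are made up here.
    msg = EmailMessage()
    msg["To"] = to_addr
    msg["From"] = "login@example.com"
    msg["Subject"] = "Your sign-in link"
    msg["X-Signin-Origin"] = origin_url
    msg["X-Signin-URL"] = signin_url
    msg.set_content(f"Click to sign in: {signin_url}\n")
    return msg

def extension_should_autologin(msg, current_tab_url):
    # The browser-extension side: only follow the link when the origin header
    # matches the page the user is actually on, so a phishing mail for another
    # site never triggers an auto-login.
    return msg["X-Signin-Origin"] == current_tab_url and msg["X-Signin-URL"]
```

The origin check is the important part; without it, any mail with a sign-in header could log the user in somewhere they never visited.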
I will admit that not being responsible for storing passwords was one of the reasons I used it. I'm by no means a security expert; one less thing I can screw up seems like a major plus.
Some say email is already like that, but it isn't for services using two-factor authentication.
I don't think there is an easy and intuitive way to get rid of passwords without involving some sort of physical component that stays on yourself.
When authenticating, the browser could just send the user's public key, and if a user with that key is in the system, the server replies with a session key encrypted with the user's public key. If browser companies would get their act together, we wouldn't have as many authentication issues as we do today.
1. Go to login
2. Forget password - click reset password
3. Go to email, find reset password email
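Under the hood, turning that reset flow into the login flow is just mailing a short-lived signed token. A minimal stdlib sketch; this is my own construction, not how any particular service does it:

```python
import base64, hashlib, hmac, json, time

SECRET = b"long random server-side key"   # assumption: one secret per deployment

def make_login_token(email, ttl=15 * 60):
    # Token = base64(payload) "." hex(HMAC(payload)); the exp field bounds
    # how long the emailed link stays usable.
    payload = json.dumps({"email": email, "exp": int(time.time()) + ttl}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_login_token(token):
    # Returns the email address for a valid, unexpired token, else None.
    try:
        b64, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(b64.encode())
    except ValueError:
        return None
    good = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, good):
        return None
    data = json.loads(payload)
    if data["exp"] < time.time():
        return None
    return data["email"]
```

The server keeps no password at all; the only secrets are the signing key and whatever protects the user's inbox, which is exactly the trade-off the comments above are arguing about.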
I wouldn't really mind if this became more common. I don't trust password managers (and access the internet from so many different devices that the only common thing they share between them is that I can access my webmail client or email on my phone.)
In mobile the sign up flow can even be more streamlined using deep linking:
1. User enters their email address.
2. User opens the email client and clicks the link.
3. The link contains an app-specific scheme: myapp://login?token=cold_fish etc.
4. The app opens and verifies the token with the server.
5. User is logged in.
The user has to enter only an email address, as opposed to email + password (and sometimes password confirmation), then only needs to click a link in the email client to sign up.
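On the app side, step 3's deep link is ordinary URL parsing. A sketch with Python's urllib.parse, reusing the myapp://login?token=... shape from the list above (scheme and host are just that example's choices):

```python
from urllib.parse import urlsplit, parse_qs

def extract_login_token(deep_link):
    # Accept only the expected scheme/host before trusting anything in the
    # URL; the app would then POST the token to the server for verification.
    parts = urlsplit(deep_link)
    if parts.scheme != "myapp" or parts.netloc != "login":
        return None
    token = parse_qs(parts.query).get("token")
    return token[0] if token else None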
(Hashed) Password storage is moved to a third-party database (the email provider). Presumably the client "remember me" links are meaningless by themselves.
I'm one of those who have been trying to do so. I created an open source approach called Handshake.js that is re-usable for developers. 
I presented this topic to a good crowd at JS.LA.
At the current time, I'm finding developers still hesitant to jump into the approach. Passwords are familiar, and there are many developer tools/libraries to quickly set up the de facto username/password approach to authentication.
 https://sendgrid.com/blog/lets-deprecate-password-email-auth... https://vimeo.com/90883185
It uses the same idea as in the post, i.e. the "lost password flow for login", but with XMPP. The latter gives you much higher flexibility in that it is actually thought out as a programmable protocol. You try to log in, the server sends a token to any of your connected clients via a bot message, you just repeat it to the bot, and you're then granted access.
I feel there is high potential here, and there even is an official XEP (http://www.xmpp.org/extensions/xep-0070.html) for this.
It works very well for our purposes. We don't need crazy security because we store no important personal information -- just product preferences. It's insanely easy on the users.
I guess I should get back to blogging about my entrepreneurial lessons learned, as this has been one of many of them....
They're doing so for the nth time, and on the (or a) device they usually use, and thus their browser (or other password manager) has already got their password remembered and thus it is pre-filled in.
Having to click back and forth between email every time you log in seems way clunky relative to that, which for me is something above 90% of the instances I log in to some web application.
Couple that smoothness with picking a non-reused, strong password for a web application (which password managers make actually practical) and the friction in the user login experience seems to have little if any upside.
I go to a site and my intention is to stay on that site throughout whatever I'm doing there. If you force me off your site for something like logging in (where it's the point of 'I trust your site, give me access') then I've lost focus and you've put your experience in someone else's hands.
If I was doing this, I'd have to open a new tab, go to GMail, wait for it to load, find the tab within GMail that has the email and then click the link. Every so often, I'd probably have to put my Google password in too. That's a lot of effort, considering that your site probably isn't that significant to me.
It really feels like they want to solve my password storage problem for me, in a very opinionated manner without any alternative for me, and while it might be a good solution, it does not feel like one (for me).
Actually, the "lost password" flow already assumes email as a single point of failure, so I suppose my 2FA comment is moot (in other words, we should be pushing for 2FA for accounts regardless of their password approach on other accounts).
The linked more technical description suggests that the latter is done (sessions on trusted devices are valid for 1 year), so you apparently cannot stop someone with a stolen device from accessing your account (while the session is active / the cookie persists).
I was initially concerned that email is insecure, but then I remember sites already use email for password reset. :) My bank does something similar by also remembering my personal computers by browser fingerprinting and/or IP address.
What I do is keep the passwords for all the sites that matter in the password manager, and the rest of the sites follow a pattern-based password or a common simple password. E.g., I typically use something like "keepass" or the domain of the visited site + "keepass".
If I steal someone's phone, I get access to any system using this.
If I buy a sim card off someone, or buy used sim cards, I could also gain access to some potentially high value targets if they use this.
If I 'borrow' someone's phone, I can steal things from them if the sites using this have value.
Email delivery problems are a factor that needs to be considered, though.
Stolen "remember me" cookies are another factor... Password-stealing malware will start harvesting those cookies instead of passwords (it's already happening in some cases).
In the past I had thought it would be great if, instead of email, it could push to your phone so a message would pop up saying "confirm login? Yes/no". It would be a really simple option from a UX perspective, but screw going anywhere near making the crypto tech to support it.
Passwords must die. We need to get to the point where there is a modular mechanism for authentication, so that individual devs are never tempted to create a users table and add a note field for password storage.
Key loggers anyone?
Here's another take on how to get rid of passwords: The Password Manifesto
Problem with this system: it requires input on a single device (the computer), removes an out-of-band authentication option (password), and pushes the requirement for authentication down the stack to a system without a secure connection (e-mail, SMS).
In this new system, the password becomes an optional secondary authenticator. Your primary authenticator is now moved to some pre-authenticated service, such as your e-mail (which you have already logged into, presumably with a password) or SMS messaging (which you have already logged into, presumably with a swipe or pin on your phone). On top of that, neither of these uses a secured connection, so MITM/interception are trivial, to say nothing of phishing+CSRF.
One of the major flaws with existing out-of-band authentication access is that we assume the user only has one form of input: the computer. If the computer gets hacked [via malware] the user cannot protect themselves. But the future is here, and we carry [networked] computers in our pockets! Turns out the most secure way we can authenticate is via two separate networks and two separate computers using two secured connections. Example:
Step 1. User requests to log in to HTTPS Site A.
Step 2a. Site A prompts User for a password.
Step 2b. Site A sends an SMS to User with an HTTPS link to click.
Step 3a. User enters password on site.
Step 3b. User clicks link in SMS on mobile device.
Step 4a. Site authenticates password.
Step 4b. Mobile site reads cookie on User's mobile browser.
Step 5. User is authenticated; both mobile device and computer have access to site.
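The dual-channel flow above can be sketched as a tiny state machine: access is only granted once both the password check (steps 3a/4a) and the SMS-link click (steps 3b/4b) succeed. This is an illustrative sketch under my own naming; actual SMS delivery and cookie handling are out of scope:

```python
import secrets

class DualChannelLogin:
    """Grant access only after both factors, on separate channels, succeed."""

    def __init__(self, password_hash: str):
        self.password_hash = password_hash

    def start(self) -> str:
        # Step 2b: generate the one-time token embedded in the SMS link.
        self.sms_token = secrets.token_urlsafe(16)
        self.password_ok = False
        self.sms_ok = False
        return self.sms_token  # would be sent to the user's phone in practice

    def submit_password(self, password_hash: str) -> None:
        # Steps 3a/4a: check the password entered on the site.
        self.password_ok = secrets.compare_digest(password_hash, self.password_hash)

    def click_sms_link(self, token: str) -> None:
        # Steps 3b/4b: verify the token from the clicked SMS link.
        self.sms_ok = secrets.compare_digest(token, self.sms_token)

    def authenticated(self) -> bool:
        # Step 5: both channels must have succeeded.
        return self.password_ok and self.sms_ok
```

The point of the structure is that neither channel alone flips `authenticated()` to true, which is what makes the single-channel attacks discussed below harder.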
It would be trivial to add this dual-auth method to their existing system, so hopefully they implement that instead of throwing the baby out with the bath water.
For what it's worth, of course, here's an attack that could compromise that system. Attacker shows user a phishing password prompt page that uses a browser hack to steal a cookie. Attacker sends an SMS to the user (assuming they found the user's phone number) with a link to another site which also exploits the mobile browser to steal that cookie. Now the attacker has the two cookies and can log in, assuming the site does not use an Authentication Manager that detects attackers who steal credentials (only large-dollar sites use these).
You can Ctrl+F arbitrarily big text files for keywords. Good luck doing the same with a two-hour-long video file or MP3: you will need to listen to the whole thing. That's what annoys me about the new trend of doing tutorials as video.
You can easily diff text.
Text works with version control systems.
Text works with unix command line tools.
You can trivially paste relevant fragments into wiki pages, emails, or IM discussions.
Google translate works with text.
Screen readers work with text.
On the other hand, there are things that pictures can convey in ways that plain text couldn't approximate.
To link to a famous example: http://en.wikipedia.org/wiki/File:Minard.png
Just looking at this, in half a minute or so, you get a pretty good idea of the quantities involved, how they evolved over time, how they are linked together, etc. Conveying the same information with pure text would be much more lengthy.
I'm not going to make an entire case for this right here - just read Edward Tufte's books if you aren't too familiar with those ideas.
I feel shy to admit it, especially here, but I dislike text. I dislike it because it's unnatural. I view it as a hack that was adopted to help communicate ideas through time and space. It's a cool hack, but still unnatural. It requires huge amounts of training to participate, and it has other issues.
I dislike it at a more fundamental level because it tends to leave ideas 'set in stone'. Text, like architecture, seems to have an unnatural tendency to remain unmodified through time and space. It creates dogma and worship and takes up the space where new structures could have potentially formed. It creates things like the bible and the constitution - things that morph from their original intent into an unbreakable form of reverence. Since it disconnects the 'bodies' of the reader and the author, the reader has a tendency to mistake the text as something different from the author and his ideas.
Text has a place - to store the facts of the world at a given time and place, certainly. To store ideas that can be accurately represented with discrete symbology. To transmit the ephemeral. But, I truly hope that we abandon text as a 'serious' medium for ideas in favor of video, audio, simulation, and virtual reality.
Many of us have a bias toward text because that has been how we have lived our lives, through its symbols. Text has altered our brains. But, imagine that you could relive your life without it, with other forms of communication, would you still want it?
The technology of the medium determines the best way to convey information through it. And on top of that, whatever people are used to may influence what they do in a newer medium. For example, we write to imitate speech. We use books on screens and try to recreate the world of print with WYSIWYG design tools.
Text may be an evolutionary winner so far, but it is by no means some ideal artifact for communicating when computers are widespread.
One disadvantage of text is its lack of expressive power. Try reading the equation shown in the article aloud. Now try giving a one hour lecture on advanced quantum mechanics without the aid of mathematical notation. We can often represent information far more concisely and accurately with a good notation than with text alone, particularly when there is some inherent underlying structure that goes beyond what we can conveniently represent with some linear sequence of a tiny set of symbols. Computers are good at that kind of thing, but we don't read Shakespeare in binary, and we certainly don't draw the Twitter icon from the article using nothing but 1s and 0s.
Another disadvantage of text is how much it relies on everyone using the same conventions, even though in the real world they don't. Go just about anywhere in the world and you can recognise what the little pictures of a man and a woman on the two doors in the restaurant mean. Replace them with M and F and you'll see people who don't speak English waiting outside to see who comes out of which door. We use different languages. We use different alphabets. In technology, we use different encodings for glyphs and invent all kinds of other concepts in an attempt to standardise how we represent written text, and we still create numerous bugs and portability issues and lost-in-translation problems. We've been using computers for half a century and change, and we still haven't standardised what the end of a line looks like. Or was it the end of a paragraph?
Now, certainly the simplicity of a text format has big advantages today in terms of things like searching for data and programmatic manipulation. But how much of that is just convention and historical accident? Right now, I'm typing this using an input device heavily optimised for text, because that's what my computer comes with. If I want to input some graphical notation, say an equation, my choices are probably limited to using some awkward purely textual representation (TeX notation, etc.) or some even more awkward half-text, half-mouse graphical user interface. Neither is an appealing choice, which is why it takes those of us working in mathematical disciplines forever to type up a simple note or paper today.
Technology does exist that can interpret a much wider range of symbols drawn with a stylus or other pointing device as an alternative means of input, but usually as a niche tool or a demonstration of a concept. Until we routinely build user interfaces that parse freeform input and readily turn it into whatever graphical notation was intended, a lot of us are still going to reach for a pencil and paper whenever we want to draw some quick diagram to explain an idea. But I bet a lot of us still do draw that diagram instead of speaking for another five minutes to try to explain it.
Personally, I'm looking forward to the day when source control doesn't show me a bunch of crude text-based edits to my code, but instead a concise, accurate representation of what I actually changed from a semantic point of view. But to do that sort of thing, we have to have more semantic information available in the first place, instead of relying on simplistic and sometimes error-prone textual approximations.
Whenever I hear about the stories of the potential of graphical programming languages, "live" code environments living in their own VM, and graph-based logic stuff, the first thing that comes to mind is how come those systems have such a short shelf-life even when some of the concepts behind them are so brilliant.
Between the increased storage space, the interoperability issues, and the exponential difficulty in dealing with non-text media in a variety of operations, there's so much more additional friction to these systems that in the end they're not worth it. Unless, of course, they can be trivially converted to plaintext and parsed as such, then they have a fighting chance.
My reaction when reading this was, "Yeah, but that's because you encoded it in PNG. That's a 'good-enough' encoding, but you can definitely make it more efficient by making it an SVG, since that image is of the kind that's ideal for vector graphics." And then I remembered SVG is a text-based image format.
Touch, frog hop. Touch.
Adding to the point: the karma system on sites such as Reddit has incentivized converting text into images, because text posts don't get karma. For example, r/quotesporn (safe for work) has many more users and quotes than r/quotes, which allows only text.
As a collector of quotes, this annoys me to no end, because I can't copy/paste the quotes into my personal quotes collection.
Let's start with a radical position. Is something text iff it can be directly encoded in UTF-8? What, then, about symbols that have not yet made their way into Unicode? Like an i dotted with a heart. Does it become text when the Unicode Consortium says so?
Nowadays memes tend to be distributed as (animated) bitmaps. But if we wanted to, we could encode them more efficiently. So are they text?
If 'text' = Unicode then that would also mean that many mathematical expressions (matrices, fractions) are not text. Math texts before symbols were not very readable: http://www.boston.com/bostonglobe/ideas/brainiac/2014/06/bef...
ASCII-encoded math is not without problems either.
Does 'text' include semantic markup like 'emphasis', 'heading', or 'list-item'? Does it include visual markup like 'italic', 'underline', 'blue', or 'Times New Roman'?
Does 'text' include newline and tab characters? Is it correct to say that newlines and tab characters exist on paper? If they don't then why do we use them to indent blocks of code?
If a sheet of paper with scribblings can be text, then can a bitmap be text too?
Now that I've brought up mathematics, HTML, and code, should we think of text as a linear medium or is it better to think of texts as trees?
What about handwritten class notes that include arrows linking together different text fragments? Are those arrows part of the text? Does that mean that texts are directed graphs?
I'm even wondering if the author might actually have meant 'always bet on language', although that seems kind of obvious.
Or perhaps he meant 'don't needlessly throw away information', which is what would be happening if your CMS served pages as HTML image maps.
That is to say, even if we're all inclined to say that text is awesome, which we probably are, we might still be saying quite different things.
I propose that this is one of the key reasons why text files are vastly superior to binary formats. While they end up very similar in normal use, the readability enables investigation and experimentation, while writing a raw struct out to a file keeps the meaning in the (possibly lost or unobtainable) original program files.
// if speed is needed, you can always cache the parsed version of the text file
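That caching remark in code form: keep the human-readable text file as the source of truth, and treat the parsed version purely as a derived artifact that can be regenerated at any time. A sketch, with a made-up `key=value` config format and cache filename:

```python
import json
import os

def load_config(path: str) -> dict:
    """Parse a simple key=value text file, caching the parsed form as JSON.

    The text file stays authoritative: the cache is reused only while it is
    at least as new as the text file, and can always be deleted safely.
    """
    cache = path + ".cache.json"
    if os.path.exists(cache) and os.path.getmtime(cache) >= os.path.getmtime(path):
        with open(cache) as f:
            return json.load(f)
    config = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):  # skip blanks and comments
                key, _, value = line.partition("=")
                config[key.strip()] = value.strip()
    with open(cache, "w") as f:
        json.dump(config, f)
    return config
```

Notice the cache itself is JSON, i.e. still text — you keep the inspectability while paying the parse cost only once per change.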
"Text wins by a mile." Wins what?
"Text is everything." I don't get his point.
The opening of this post has the feel more of a fan blog about their favorite baseball team than something intellectually serious.
In the second paragraph the author says "text is the most powerful, useful, effective communication technology ever, period." I suppose this is the purpose of this post? I guess if he is saying if he could only have one form of communication, he would choose text (removing speech from the list). I guess, but is this actually a debate? Are there really different "camps" that prefer images over text or video, etc?
I could delve into the arguments presented in the article but I think first I need to understand the thesis.
EDIT: To the down-voter, do you have anything to ask or say? Or are you just down-voting because you disagree? Everyone is interpreting this article to mean "there is too heavy a bias towards multimedia on the web today" but this is not the thesis of this article as I see it. The author is making arguments that text is better than multimedia in an absolute/complete sense. This is an entirely different argument.
1) In addition to being searchable, text lends itself to other forms of automated processing such as translation and text-to-speech for the vision impaired.
2) I'm prone to debilitating eyestrain headaches when I try to do any kind of graphical work on a computer, yet I can write text without looking at the screen.
In 100 years, what the average person will be able to communicate quickly with images is likely to be unimaginable to us today.
Anyway, I'm not saying text isn't superior in many ways, just that it's way too early to judge images given technical limitations.
I think a writing system like the Chinese one is instructive in this respect. It has roots from thousands of years ago, like Western writing systems, and both were about as effective until the end of the 19th century with the Linotype machine, and the mid-to-late 20th century with 7-, 9-, 14- and 16-segment liquid crystal displays. Western writing systems enjoyed a big advantage from a technical perspective until only recently, because they were simple enough in form to be conveyed by simpler technologies than the Chinese writing system.
If such a gulf can exist, even if only for a few decades, between two "text" systems, then it's not a stretch to see image-based systems as comparable, but requiring better technology to become as powerful as text in the sense that Graydon is talking about here.
It's one thing that a computer cannot make sense of 4000 bytes of a tweeting bird, but human brain instantly recognizes the rendered sign.
Also, videos and music are very hard to describe in text: text just doesn't capture the unique feel a video or a piece of music has.
Take a document written in the word processor du jour 20 years ago. It is highly unlikely you could mount the physical media, let alone import the data with high fidelity. But plain text can be read by just about any tool, whereas binary/proprietary formats are limited by the longevity of the hardware/software that created them.
It's 28K; a full text description might be smaller, but it would hardly be accurate. My brain can process this single picture faster than any possible text representation of it.
Text is more practical 99% of the time but it's actually small pictures, known as letters, used together to symbolize concepts. I don't think my brain interprets letters individually, but my eyes mostly catch word by word, hence a picture.
Quite possibly, we might end up a forgotten generation, since procedures for cataloging digital memorabilia will only be invented after the lessons learned from the deaths of the first digital natives. One can only hope that archive.org will at least have an ugly copy of our blog.
I'm sure the browser industry could benefit from an open, compiled HTML format; it would be so fast. I still wonder why there is no such format.
It's not about file size, though: gzip does a really great job of compressing text. It's about making a page load faster. It's no surprise to see web browsers use so much memory: HTML is very flexible (there's nothing better), but it's fat.
That is a problem somewhat similar to RISC vs. x86. RISC has a simpler set of instructions and a faster processor, but executables are much, much bigger, requiring more cache. x86 has a more complex set of instructions, so it's slower, but the executables are much smaller. It's a balance to find.
I wonder if you could extend battery life by using compiled HTML. I would love to test that kind of tech on "normal" cellphones and see how it performs.
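On the gzip point, it's easy to check how well repetitive markup compresses; the sample page below is made up purely for illustration:

```python
import gzip

# A deliberately repetitive snippet of markup, standing in for a real page.
html = "<ul>" + "".join(f"<li class='item'>entry {i}</li>" for i in range(200)) + "</ul>"

compressed = gzip.compress(html.encode())
ratio = len(compressed) / len(html)
print(f"{len(html)} bytes -> {len(compressed)} bytes ({ratio:.0%} of original)")
```

Transfer size is largely solved this way; the parse-and-layout cost the parent is speculating about is a separate question that compression doesn't touch.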
There is a whole universe of phenomena text is sub-optimal for. Experiences being one of them.
True, and very interesting to consider.
OTOH, if you can simulate a person who knows how to express themselves speaking to you personally, there's an even older technology you tap into. That's a technology we have actually adapted to biologically.
Everything else is just hijacking faculties designed to allow your uncle to explain to you how to make rope from bark.
I don't really know but text is probably not the oldest...
If the point of the article was to trick people into clicking, then it succeeded in that I guess.
Looking at the OP's article: if my purpose is to explain human rights, text is staggeringly more efficient in the labor of creation, not just the time spent interpreting, storing, or searching it.
An artist's life's work might produce a painting that conveys the entire meaning of the article's definition of human rights. Maybe. I bet that would be an amazing painting and I'd enjoy viewing it. But... aside from high art, can we afford general commerce in an artistic style? Is it affordable for society to create an interpretive-dance implementation of my mortgage statement, and is that a wise use of limited artistic skill and labor?
It's possible to create deeply meaningful works of art, at staggering expense of materials and labor in creation, interpretation, storage, and archiving alike. That doesn't mean that most human creations (my water bill, the instructions for my TV, the receipt Amazon included with my $4 HDMI cable) are worthy of artistic labor.
If a graphics artist or painter is any good, I don't want that artist to waste time on my electric bill, I'd much rather have the fruits of their labor hanging on a wall in a frame. If they're not any good, I don't want them screwing up my electric bill making it incomprehensible.
After 5 comments I couldn't post for -1.5- 3 hours. That's fucking retarded. And your fucking emotional downvote shit. Worst fucking site for discussion ever.
I thought I could come back but after being involved in more free communities this site feels like a fucking prison where you get beaten and put in a quarantine cell for saying nigger or jew or fucking anything that might offend someone's fucking ass.
Fuck you Fapper Jews.
I hate that feeling.