Hacker News with inline top comments - 5 Jan 2014 (Best)
1
What I Didn't Say paulgraham.com
1231 points by twampss  5 days ago   567 comments top
1
grellas 5 days ago  replies      
A word about credibility. It comes from the Latin word credo, meaning "I trust." Its value exceeds that of money because it marks you as a person - as someone who is respected, who is trustworthy, and whom you would want to count as a friend. It marks you not as perfect but as special. It makes others ponder not so much that they did the last deal with you but that they would want to do the next deal too. Just as we build credit through many transactions, so we build credibility by the very pattern of our lives. Credit and credibility derive from the same root and signify the same thing: when in doubt, we can trust the one who has either trait. Not blind trust, just a benefit-of-the-doubt level of trust.

Well, pg has earned our trust and deserved the benefit of the doubt when something so off kilter as this is attributed to him. He did not get it here, and that is a sad testament to how crowd-inspired frenzies can bend our perceptions in such faulty ways. Let us only hope that we can learn some good lessons from this.

pg's response is actually priceless: it is like a soft-spoken witness upending a bullying lawyer who had just viciously attacked him, leaving the attacker reeling for all to see. Indeed, the mob looks pretty much like an ass at this point and kudos to pg for his more-than-able defense. Very lawyer-like, in a way, but far more classy.

2
Edward Snowden, Whistle-Blower nytimes.com
771 points by Anechoic  2 days ago   157 comments top 4
1
pvnick 2 days ago 3 replies      
That was such a refreshing article. I've been saying it for a while now: I'm hopeful that we're going to see some very positive reforms in 2014 or 2015, as well as an eventual hero's welcome for Snowden. It takes a while for such a massive shift in public opinion, but it's inevitable. The reason it's taking so long is just a knowledge gap with the people who aren't as well-informed and don't know the magnitude of the abuses. As people learn the full scope of what's been revealed they tend to be (for the most part) outraged. I look forward to a couple decades from now, when I can tell my kids about how those of us who were paying attention were all vindicated when the NSA reforms were enacted and Snowden was given a full pardon.
2
r0h1n 2 days ago 3 replies      
Absolutely! I especially loved this part:

>> "His leaks revealed that James Clapper Jr., the director of national intelligence, lied to Congress when testifying in March that the N.S.A. was not collecting data on millions of Americans. (There has been no discussion of punishment for that lie.)"

3
umanwizard 2 days ago 5 replies      
I find it pleasantly surprising -- almost unbelievable, in fact -- that a highly sought-after fugitive accused of treason and practically certain to be found guilty of serious crimes is so widely supported by the public and the media.

Has there ever been another person whom the executive has done everything in its power to paint as a dangerous enemy of the state, whose approval rating was several points higher than the President's and several times higher than that of Congress? Or is this a never-before-seen situation?

The inverted totalitarianism[1] we live in can seem almost invincible, but this to me is a big glimmer of hope that some people at least are still unwilling to swallow the (two-)party line.

I hope this leads to some real change, but then again, I can't exactly hold my breath.

[1]: http://en.wikipedia.org/wiki/Inverted_totalitarianism

4
ajays 2 days ago  replies      
On the surface, I welcome this editorial. About time.

But the NYT has deep connections to the USG, so I'm wondering where this editorial is coming from. It could be a trial balloon on the part of the administration to test the public's appetite for a reduced sentence for Snowden.

3
Fired? Speak No Evil nytimes.com
680 points by uptown  1 day ago   346 comments top 3
1
grellas 1 day ago 4 replies      
Non-disparagement clauses can be seen as a throwaway item, a suffocating burden, an essential protection, or a damned nuisance, each according to taste or context.

To begin with, lawyers tend to see these clauses as essential protections and they are sometimes right. But, right or wrong, they tend to insist upon them, especially in the employment context. This explains their prevalence but, of course, does not necessarily justify their use.

Just to illustrate a case where they truly are an essential protection: suppose you and a competitor have been fighting for years in court over ugly and untrue things that someone has said about you or your company - non-trivial things that have really hurt you. When it comes time to settle that case, a continuing non-disparagement obligation will be not only helpful but essential to the resolution. The same is true in many other legal fights. When emotions have run high, and parties have antipathy toward one another, it is good practice, to help ensure the peace after their fight has been settled, to require that they not speak badly of one another and to provide a simple mechanism such as binding arbitration to resolve any follow-on dispute over whether they have done so. In such cases, there are excellent reasons to bind parties contractually to restraints on their ability to speak where they would normally be free to do so.

The employment context gets trickier because the antecedent acrimony that characterizes a legal fight may well not be present at the time of a termination and the question then arises: why am I being artificially muzzled? And there is a point to this: why be barred from speaking truthfully about a former situation even if it might be negative? why be at risk of a harassing lawsuit over what it means that something "may" reflect "negatively" about someone? why, in an age of easy communication through social media, be made to feel you cannot even speak about something that may have been a major part of your life, perhaps for many years? What may be seen as a throwaway item by some can be felt to be suffocating by others, and all the more so because it is tacked onto a token severance that gives you very little in exchange.

That said, I would say that the overwhelming majority of employers and employees alike see these simply as throwaway items. They figure no one will care about such clauses except the lawyers. And, in most cases, they are probably right. The question then becomes whether one should refuse to sign as a matter of principle or just sign and take the money. Most employees take the money.

Of course, employees can push back if they have leverage. No one is obligated by law to sign a separation agreement. If the terms aren't right, and can be made right, then push back. Insist that the token severance be made more substantial. Or that non-disparagement, if it is to be included at all, be made mutual (it can be quite a headache for a large employer to keep control of its many people to ensure that none speak badly of you). Or insist that it be narrowed or clarified so as to reduce or eliminate vagueness about what may or may not be deemed disparaging. Or insist that it be coupled with other considerations that give you benefits apart from your normal final pay, etc. This sort of negotiation can make these clauses a big nuisance from the employer standpoint and may cause the employer simply to drop the clause. However, all of this assumes employee leverage, which doesn't often exist in the routine case, and so, as noted above, most employees simply take the money, accept the restriction, and don't bother to look back.

And so it all depends. For the author of this piece, this was a critical issue. For many others, it is not. Context is critical. And for all but trivial cases, do check with a good lawyer to understand the implications of what you are signing. If the risks are real, there is nothing worse than a harassing lawsuit from a former employer angry with you over some statement you made out of emotion. This is what gives these clauses a bad name and it is also what can make them dangerous. In such cases, be cautious about exposing yourself to such risks in exchange for some token severance. It is probably not worth it.

2
danielweber 1 day ago 10 replies      
>> "And I was soon informed that the president wished to assure me that there is nothing unusual about such clauses."

Whoop whoop whoop! This sets off giant alarm bells in my head.

It might be totally normal. That doesn't mean you should sign it.

It's also an older-than-dirt salesman tactic to say that something you just made up is "totally common."

Of course, the company can attach whatever clauses it wants to a separation agreement. You aren't entitled to a severance payment. I'll tell other engineers that two weeks' salary is a piddly amount for the company to pay for you to surrender such rights. You can just walk away. They are the ones who want you to sign.

(It's kind of ironic, but after you have been given notice that you are fired, you have power. They want you to do certain things, and what are they going to do? Fire you? Already did that. Withhold pay? Illegal.)

3
geebee 1 day ago  replies      
I have serious misgivings about this sort of "hush money". In general, I'd prefer not to interfere in private contracts, but this one has such serious implications for everyone else. In particular, it can end up creating an information imbalance, enforced by the courts, that allows a certain group of people to remain "in the know", with everyone else unaware of what is going on.

I read a while back about a law firm that had evidently done something very dodgy - representing both an inventor and the firm purchasing the invention at the same time. The engineers were eventually paid a settlement, but part of the settlement was a gag order - nobody was allowed to talk about what had happened or the amount of money paid. This included, of course, the press.

Now, what do you want to bet that well connected lawyers, upper managers, and so forth, are able to access the terms of this deal - even if they weren't involved? What are the odds that an inventor who approaches a law firm will know what transpired and why? The imbalance of information will put the inventor at an overwhelming disadvantage.

My gut feeling is that there is a third party in all of this - me. Well, me, and all the little people. I understand the need to enforce contracts within reason, but I'm having a tough time seeing my own personal interest, or the general public interest, in enforcing these "stay quiet" contracts.

I'd also point out that this isn't really a situation where we are prying into a private transaction and forcing people to talk. Our courts are actually enforcing the gag rule that keeps most of us in the dark about what is really going on out there. My misgivings about regulating private transactions aren't as strong when all we'd need to do is stop enforcing contracts that are clearly against the public interest [1].

[1] I am still thinking this through. I'm not absolutely sure this is against the public interest, or, even if it is, if we the courts should refuse to enforce the provision. It's how I'm leaning, but I have a sense that there may be more to this. I am generally glad that courts won't enforce certain terms of contracts, such as very long non-compete clauses and the like...

4
Toshiba says they made a mistake but they still cannot help me site44.com
656 points by barhum  1 day ago   190 comments top 8
1
sergiotapia 1 day ago 5 replies      
You probably aren't familiar with how "guarantees" work here in South America, ugh.

See, companies like Samsung and Toshiba have "certified" stores that "take guarantees", but they are not tied to their parent company; they are privately owned stores that just negotiated with the parent company to use its "sticker".

I bought a Philips shaver and, under warranty, the Philips service station wanted me to pay 70% of the cost of a new one, despite it being a DOA device.

So while the sticker works as it should in the US and Europe, South America has a god damn wild west scenario. Anything goes, and if you don't like it, buy something else. Yep.

(Source: I live in Bolivia)

3
anigbrowl 1 day ago 3 replies      
Manuel Diaz is the head of Toshiba Sales & Marketing for Latin America: https://plus.google.com/105717227635873644097/about http://www.linkedin.com/pub/manuel-diaz/4/862/644

Make a nice polite blog post with all of your documentation (including your sales receipt) and then send the link to him.

4
x0054 1 day ago 1 reply      
I purchased a Toshiba laptop in 2002. Within 3 months the laptop's graphics card failed. Toshiba does not repair their own laptops; rather, they send them out to some 3rd party repair center. The repair center took 3 weeks to repair the laptop. When I came back to pick it up, the laptop started but the screen turned off as soon as I picked it up from the counter. I left it with the repair center. 2 weeks later they called again. This time it worked for a day before dying again. The 3rd time they took another 3 weeks to repair it. After that it worked for a month and died. I gave up and got a new laptop. Since then I have never purchased a Toshiba. I don't care how good or bad their products are; their customer service is one of the worst.
5
devindotcom 1 day ago 1 reply      
I'm probably with you, but there's not a lot of information here. Where did you buy it? Could it have been from a dealer that wasn't authorized to issue this warranty? If they couldn't agree to it on Toshiba's behalf the contract would be null, right? And what is the problem with the laptop - though that is of course a separate question from that of honoring the warranty.
6
davidw 1 day ago 2 replies      
I've had good luck with Dells from that point of view. I've bought 3 in the US, and two, at some point in their lifetimes, have needed some love from a technician (bad HD, and a cosmetic problem with a very new laptop that I wanted fixed because I spent quite a bit on it). Despite being very much not in the US anymore, they promptly dispatched people on site (in Innsbruck, Austria, and Padua, Italy) to fix the problems, no questions asked.

(Edit: by the way, most recent one was one of these - nice dev machine if you like Linux! http://www.dell.com/us/business/p/xps-13-linux/pd )

7
etler 54 minutes ago 0 replies      
I heard plenty of bad things about Toshiba customer support the last time I was looking for a laptop. It's not too hard to find complaints about them.
8
cabinguy 1 day ago  replies      
I'm sorry to hear that you are having this problem. After buying/selling 10,000+ used laptops (every brand imaginable) over many years, I personally purchase and recommend only Toshiba laptops. Take it for what it's worth.
5
About Python 3 alexgaynor.net
542 points by jnoller  5 days ago   344 comments top 4
1
thatthatis 5 days ago 5 replies      
I'm going to go against the grain here and say that moving slowly is one of my absolute favorite features of Python and its libraries.

Rails and Django were released around the same time; Rails is on version 4, Django is on 1.6.

Moving slowly means I can spend more of my time writing code and less of my time upgrading old code. More importantly, every release requires a perusal: did the API change, what's new, are there breaking changes I need to be aware of?

I didn't appreciate how nice a slow but consistent and deliberate release cycle was until I started using Ember, which seems to release a new version monthly.

It's generally acceptable to be one or two x.x versions back, but much more than that and the cost of maintaining libraries skyrockets, so you start losing bug fixes and library compatibility.

With Python there's not really a question of whether I can run my code for a year between non-security upgrades, even with a few dozen third-party libraries. That stability is immensely valuable.

2
agentultra 5 days ago 4 replies      
I like Python 3. I prefer it. It is better to program in than 2.x. Iterators everywhere, no more unicode/encoding vagueness, sub-generators and more. It is a much better language and it's hard to see how it could have evolved without a clean break from its roots.

However, it has been interesting to follow over the last five years. It has been a sort of "what if p5 broke the CPAN" scenario played out in real life. Breaking compatibility with your greatest resource has a painful trade-off: users.

Nothing I work on is even considering a migration to Python 3. OpenStack? A tonne of Django applications that depend on Python 2-only libraries? A slew of automation, monitoring and system administration code that hasn't been touched since it was written? Enterprise customers who run on CentOS in highly restrictive environments? A migration to Python 3 is unfathomable.

However my workstation's primary Python is 3. All of my personal stuff is written in 3. I try to make everything I contribute to Python 3 compatible. I've been doing that for a long time. Still no hope that I will be working on Python 3 at my day job.

Sad state of affairs and a cautionary tale: "Never break the CPAN."

3
evmar 5 days ago 2 replies      
I like to think of engineering as "solving problems within a system of constraints". In the physical world, engineering constraints are things like the amount of load a beam will bear. One of the primary easily-overlooked constraints in the software world is backwards compatibility or migration paths.

There are many systems that people look at today and say: "This is terrible, I could design a better/less-complicated system with the same functionality in a day". Some examples of this dear to my heart are HTML, OpenID, and SPDY. It's important to recognize that the reason these systems succeeded is that they sacrificed features, good ideas, and sometimes even making sense to provide the most critical piece: compatibility with the existing world or a migration plan that works piecemeal. Because without such a story, even the most perfect jewel is difficult to adopt.

The OP, about Python 3, is right on except when it claims that making Python 3 parallel-installable with 2 was a mistake; dropping that would have made it even harder to migrate to 3 (unless the single binary was able to execute Python 2 code). (Also related: how Arch screwed up Python 3 even more: https://groups.google.com/d/topic/arch-linux/qWr1HHkv83U/dis... )

4
themgt 5 days ago  replies      
It's fascinating to compare this with ruby 1.9, released around the same time, but seemingly with a slightly better cost/benefit ratio, having nice new features and also significantly improved performance, and with ruby 1.8 being deprecated with a lot more speed and force. It got everyone to actually make the switch, and then ruby 2.0 came along, largely compatible and with more improvements, and now ruby 2.1 seems to be an even smoother upgrade from 2.0.

The ability of the ruby core team to manage not just the technical aspect of making the language better, but also to smooth the transition in a way that actually succeeded in bringing the community (and even alternate ruby implementations) along with them, hasn't been given nearly enough credit. You could analogize it to Apple with OS 9 -> OS 10.9, versus Microsoft with people still running XP.

6
We're About to Lose Net Neutrality wired.com
527 points by joseflavio  6 days ago   262 comments top
1
pvnick 6 days ago  replies      
I consider everybody here very smart. In many cases smarter than myself. Therefore, could somebody please explain why we would give the government, which has shown itself to be terribly incompetent with technology issues, the ability to enforce net neutrality? Seriously, I can't get over the dissonance here. If it's such a shitty idea, let consumers decide. Google Fiber et al will just eat the major telecoms' lunch sooner or later anyway. It may just take a little longer, but we'll avoid the possibility of letting the government crush Internet innovation forever.
7
Backdoor found in Linksys, Netgear Routers github.com
526 points by nilsjuenemann  3 days ago   137 comments top 14
1
maxk42 2 days ago 10 replies      
About a year ago I left a cable modem and internet service (Time Warner) at an apartment I was moving out of while my friend continued to stay there. I had configured the thing in a manner I thought to be fairly secure -- strong password, no broadcast, etc.. One day the internet goes down and my friend doesn't know what to do. She calls the ISP and asks them what's wrong. They say they can't release any information about the service to her without my permission, so I suddenly get a three-way call explaining that my friend and the ISP representative are on the line and I need to give my authorization to access the account information. Being the person I am, I attempt to troubleshoot things over the phone before giving out any sort of account credentials. Eventually, I ask her to log into the router configuration page. She doesn't know the password and the first one I gave her doesn't work. The representative chimes in "That's fine -- I can just change it from here."

"...What?"

I was furious. Time Warner had left a backdoor in all their modems that gives them administrative access to my private connection. And yes -- she did alter the password remotely. She didn't seem to think there was anything wrong with this. I tried googling for relevant information, but wasn't able to find anything more than speculation at the time.

2
earlz 3 days ago 3 replies      
Interesting. Reminds me of the hack I did on a (mandatory) modem/router forced on AT&T users. They had a bunch of problems with it, so one day I got fed up after the millionth disconnect and cracked it open. Got a serial root shell by using the "magic !" command (discovered completely at random) and dumped the source to the web UI (in Lua/haserl). From there I found the equivalent of a SQL injection vulnerability and used it to gain a remote root exploit.

Most annoyingly, AT&T put out a firmware update some months later that closed the exploit, but didn't fix any other problems. So, I found another, more intrusive/permanent exploit. Still waiting on them to patch that one, heh. But now they are at least putting out some updates that actually fix problems. Hopefully user uproar will continue to drive them to fix more.

3
midas007 2 days ago 2 replies      
This is not surprising. It's a calculated risk to make a product just good enough. Development resources invested in retail wireless gear are minimal. I've worked on firmware for high-confidence industrial wireless gear used in mines. Most retail units, by contrast, fall over under load, run obsolete+unpatched code and/or reboot randomly. Retail customers will tend to just put up with it and not return the product before the merchant's return grace period expires.

It's a totally different attitude when the intended market is enterprise: it's assumed that if a product causes a failure, the vendor is going to receive escalating, unpleasant phone calls until it's resolved.

4
nlvd 3 days ago 1 reply      
"And the Chinese have probably known about this back door since 2008." http://www.microsofttranslator.com/bv.aspx?from=&to=en&a=htt...

That's a pretty scary prospect if it's been 'known' and exploited since at least 2008. Poor form, Netgear/Linksys.

5
dbbolton 2 days ago 4 replies      
Has there been a technical write-up on this yet? I honestly tried to read the presentation and had to quit after the third superfluous meme slide.
6
nwh 2 days ago 1 reply      
I have confirmed this (or something similar) is present in the Netgear DG834N as well.
7
elwell 3 days ago 3 replies      
TIL: Some people know a lot more than me about hacking. That PDF was interesting, but I only understood a small fraction of it.
8
salient 3 days ago 3 replies      
Can this be fixed by changing the firmware to OpenWRT or DD-WRT?
9
redx00 2 days ago 1 reply      
Has anyone ever tried submitting a GPL request to http://support.linksys.com/en-us/gplcodecenter ?

I wonder if there is anyone still working in the GPL compliance department.

10
atmosx 2 days ago 0 replies      
I live in the Czech Republic and my Zyxel from O2 has port 7547 open (Allegro RomPager 4.07) and you can't do anything about it. There is no editor on the installed Linux version (a cut-down Linux, probably OpenWRT or something similar), no package manager, no nothing.

If I flash the firmware, the warranty is void and I have no user/pass to re-enable the ADSL. So basically, my router is a hostile AP.

Given that it's a common pattern among ISPs in order to offer quick service - I firmly believe that ISPs do it for practical reasons - and that it ends up killing your security, the best thing is to put the router in bridged mode, get a cheap custom-made router like the carambola2[1], and install FreeBSD[2] on it.

Disclosure: I donated one of these devices to Adrian Chadd[3] in order for him to port FreeBSD on this device, which enabled me to use PF[4] - my favorite firewall - but I have no affiliation otherwise with 8devices or FreeBSD.

[1] http://8devices.com/carambola-2

[2] https://wiki.freebsd.org/FreeBSD/mips/Carambola2

[3] https://wiki.freebsd.org/AdrianChadd

[4] http://pf4freebsd.love2party.net

11
dobbsbob 3 days ago 2 replies      
Buy a $200 Soekris box and install OpenBSD or m0n0wall on it, or use any old PC you have lying around with 2 network cards.
12
billpg 3 days ago 1 reply      
I've used GRC's "Shields Up" and asked for a user-specified probe for port 32764 and it came back "Stealth".

Assuming GRC isn't out to deceive me, can I assume that my router is fine?

Bill, using a Netgear router.
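
Worth noting: GRC probes from the internet, so a "Stealth" result only covers the WAN side; on several affected models the service on TCP 32764 reportedly listens on the LAN interface, which an external scan can't see. A quick LAN-side check is a plain TCP connect. The sketch below is generic POSIX sockets, nothing router-specific; the address you pass is whatever your gateway actually is.

    /* probe.c - minimal LAN-side check for a listener on TCP 32764.
       Run from inside the network against the router's LAN address,
       e.g. ./probe 192.168.1.1 */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <router-ip>\n", argv[0]);
            return 2;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(32764);            /* the backdoor port */
        if (inet_pton(AF_INET, argv[1], &addr.sin_addr) != 1) {
            fprintf(stderr, "bad address\n");
            return 2;
        }

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0)
            puts("OPEN: something is listening on 32764");
        else
            puts("no listener on 32764 (refused or timed out)");
        close(fd);
        return 0;
    }

An open port doesn't prove it's this particular backdoor, and a closed one says nothing about other interfaces, but it answers the question an external scan can't.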

13
m86 2 days ago 2 replies      
ScMM = SerComm, perhaps?

Many of Linksys' old DSL modems were manufactured by them, AFAIK.. and it seems many of the noted 'probably affected' models have a SerComm manuf'ed device for at least one revision of that model line

More probable SerComm manuf'ed devices are visible at the WD query link below..

http://wikidevi.com/w/index.php?title=Special%3AAsk&q=[[Manu...

14
jacob019 3 days ago  replies      
Is this backdoor only served up on the WLAN, or is it also exposed to the internet?
8
Vacations are for the weak sethbannon.com
428 points by sethbannon  2 days ago   350 comments top 2
1
equalarrow 2 days ago 10 replies      
I think the big takeaway here is that a lot of companies don't care: these are the rules, and you accept them or not. Most people accept them - they feel like they have to. I've done it quite a few times.

But what we really need is more self-realization like this at the top. This is where the change for these sorts of policies can happen. On one hand, it's really sad that we're so work obsessed here - money is more important than people so much of the time. But, on the other hand, there is still room and freedom to make your own way and write your own rules.

I think about this topic a lot, especially because I am fighting burnout myself. I didn't do any work for most of xmas break and when I went back this week I kept thinking "I need another few months off". I even had to push for the two days after xmas off - there was a little pushback from the CEO since we're a small 4-person company. But, I'm a little older than everyone and I was thinking "fuck it, I need to chill".

Ideally, my dream job is to just work for myself (I'm sure that's everyone else's too). Sure, there are tradeoffs with that, but there's something about working when you want, where you want. I think there can definitely be a balance between being on vacation a lot and outsourcing all the non-important work to other people who will do it. Tim Ferriss is a great read for this type of living; he exemplifies the work-to-live-not-live-to-work thing (or work as little as possible and really live).

I took a vacation a few years ago to Spain with my wife and mother-in-law. It was awesome, my first time there, but I was 'pressured' by work to keep crankin on our app. It was such bullshit and I was really pissed about that pressure - vacation was not vacation. I told myself: 1) I'd never work for someone else on vaca again - ever, and 2) I'd never make anyone else do that. Needless to say after a few days I was like, 'fuck this, this is the stupidest thing in the world. I'm in one of the best food and historical places in the world and the guys at home want me to keep coding. Bullshit.'

I think there's a point where you either keep going with everyone else's rules or you make your own. Get busy livin or get busy dyin. I'm at the point in my life where I'm getting to the last of dyin for someone else's deal and starting to live for my own. It's not impossible; it just takes discipline and focus. Otherwise, me and everyone else are working under someone else's thumb, by their rules, working on vacation. Dumb.

2
glesica 2 days ago  replies      
It is unfortunate that so many people are stuck in jobs that don't offer sufficient time off. It seems fairly common for employees to "start"* with just two weeks of paid time off (and many, I think, even lack access to unpaid leave for non-medical reasons).

One might argue that two weeks is sufficient for a week-long vacation every six months, but most people I know use most of their time off for family obligations and "work outside of work" like home repairs. This is a crap situation.

* Quotes because it is becoming less and less common to work for a company for decades, so the traditional system of awarding vacation based on length of tenure is becoming more and more insane. How many people never even get past the initial level of paid time off before switching jobs?

9
If you move your mouse continually the query may not fail. Do not stop moving microsoft.com
419 points by shawndumas  12 hours ago   151 comments top 9
1
zbowling 11 hours ago 1 reply      
I had this bug once about 10ish years ago when I was still a Windows dev. Creating a 2nd window on a thread that already has a window, but then pumping its main loop on a background thread, can cause this. If you do that, the loops get hooked to each other regardless of the thread pumping them. The child window has to wait until the parent window forwards the event for the second thread to pop it off. A mouse move will send a WM event and keep the child window's loop on the other thread spinning. My WM_TIMER on the child window was stuck as well.

It happened to me because I had a hidden window on the background thread to get WM events for USB disconnect and reconnect messages from the system. The bug report was funny. "Connecting USB data collection probe doesn't work unless the user moves the mouse after connecting." and the follow up bug "Data collection only works when moving the mouse."
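
For readers who haven't written Win32 message loops, the setup described above looks roughly like the sketch below: a hidden window created and pumped on a worker thread just to receive WM_DEVICECHANGE broadcasts. This is a reconstruction for illustration, not the commenter's actual code; the class and function names are made up.

    #include <windows.h>

    /* Window proc for the hidden notification window: watch for
       device-change broadcasts, defer everything else. */
    static LRESULT CALLBACK NotifyWndProc(HWND hwnd, UINT msg,
                                          WPARAM wp, LPARAM lp)
    {
        if (msg == WM_DEVICECHANGE) {
            /* react to USB connect/disconnect here */
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }

    /* Worker thread: create an invisible top-level window and pump its
       queue. The failure mode described above appears when this loop
       ends up coupled to a window owned by another thread: GetMessage
       then stalls until the other thread's queue moves, e.g. on
       WM_MOUSEMOVE -- hence "works only while the mouse is moving". */
    static DWORD WINAPI NotifyThread(LPVOID unused)
    {
        (void)unused;
        WNDCLASS wc = {0};
        wc.lpfnWndProc   = NotifyWndProc;
        wc.hInstance     = GetModuleHandle(NULL);
        wc.lpszClassName = TEXT("UsbNotifyWindow");
        RegisterClass(&wc);

        HWND hwnd = CreateWindow(wc.lpszClassName, TEXT(""), WS_OVERLAPPED,
                                 0, 0, 0, 0, NULL, NULL, wc.hInstance, NULL);
        (void)hwnd; /* never shown; it exists only to receive messages */

        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        return 0;
    }

    int main(void)
    {
        CreateThread(NULL, 0, NotifyThread, NULL, 0, NULL);
        /* the real app would pump its own UI message loop here */
        Sleep(INFINITE);
        return 0;
    }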

2
frik 9 hours ago 2 replies      
The Excel codebase is very old; most of it is still present in recent versions.

Excel 2010 still relies on WinAPI Fibers [1] (lightweight threads). In recent years the idea got popular again with Lua's coroutines and Go's goroutines.

So scheduling is done manually by the application, instead of relying on the OS (see the sketch after the links below).

In 2014, there is a lot of code in Excel that dates back to Excel 3 (1990). Excel 2010 still relied on the outdated MDI concept [2] that had been introduced with Windows 3, and only one Excel instance/main window can run.

One can embed the Excel OLE component in one's own app [3]. As soon as the component gets the focus it replaces the traditional menu bar with the custom-drawn "ribbon bar". It looks weirdly out of place (a Win9x-style OLE app with a ribbon bar). Office apps started with custom-drawn UI objects in Office 97, which had those fancy toolbars and an italic window title [4] instead of the boring Win95 look that the WinAPI provided.

[1] http://msdn.microsoft.com/en-us/library/windows/desktop/ms68... ; http://msdn.microsoft.com/en-us/library/windows/desktop/ms68...

[2] http://en.wikipedia.org/wiki/Multiple_document_interface

[3] e.g. using the sample apps that come with the "Inside OLE" 2nd edition book.

[4] http://www.cheresources.com/economics.shtml
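
For anyone who hasn't met the API mentioned above: fibers are plain WinAPI calls, and the whole point is that nothing runs until the application itself says so. A minimal sketch of the mechanism only (obviously not Excel's code):

    #include <stdio.h>
    #include <windows.h>

    static LPVOID main_fiber;   /* the "scheduler" fiber */

    /* A fiber only runs when someone calls SwitchToFiber on it, and it
       keeps the CPU until it explicitly switches away -- cooperative
       scheduling by the application, not by the OS. */
    static VOID CALLBACK worker(LPVOID param)
    {
        (void)param;
        for (;;) {
            puts("worker: did one slice of work");
            SwitchToFiber(main_fiber);      /* yield, coroutine-style */
        }
    }

    int main(void)
    {
        main_fiber = ConvertThreadToFiber(NULL); /* thread becomes a fiber */
        LPVOID w = CreateFiber(0, worker, NULL);

        for (int i = 0; i < 3; i++) {
            SwitchToFiber(w);               /* run worker until it yields */
            puts("main: scheduling again");
        }
        DeleteFiber(w);
        return 0;
    }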

3
ck2 11 hours ago 4 replies      
Can you imagine how crazy you'd think the customer was if you were giving technical support and they insisted they had to do this?
4
elliottcarlson 9 hours ago 5 replies      
Whenever I am waiting for something to happen, I tend to move my mouse pointer around in circles on the screen - this always caught people's attention and I've been asked plenty of times over the years if there was a reason for it. I always joked that it helped make things go faster by letting the computer know I was still there and waiting - I guess I wasn't lying.
5
ballard 6 minutes ago 0 replies      
iOS has a similar problem, especially when setting up notification-based APIs and expecting a notification within a certain period of time (i.e. a user needs their location now to complete an action, say a social post with location). Sleeping won't work, nor will waiting on another thread. What works is looping until timeout by sleeping for a few ms and then running the main thread's run loop. Gross, but it works. Otherwise, the location notifications never arrive. Such is the downside of cooperative multitasking.
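
The trick described above can be written against the plain C CoreFoundation API; a minimal sketch, where `done` and the timeout are illustrative stand-ins for whatever flag the real callback flips:

    #include <CoreFoundation/CoreFoundation.h>
    #include <stdbool.h>

    /* Spin the current thread's run loop in short slices until the
       callback flips *done or the deadline passes. Without the
       CFRunLoopRunInMode calls, queued notifications (location
       updates, etc.) are never delivered to this thread. */
    static bool wait_for(volatile bool *done, double timeout_s)
    {
        CFAbsoluteTime deadline = CFAbsoluteTimeGetCurrent() + timeout_s;
        while (!*done && CFAbsoluteTimeGetCurrent() < deadline)
            CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0.01, true);
        return *done;
    }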
6
kyberias 10 hours ago 4 replies      
This is an 11-year-old knowledge base article for a bug in a product (Excel 97) that saw the light of day in 1997 - that is, 17 years ago. Yes, the second workaround is kind of hilarious, but let's not draw too many far-reaching conclusions from it. I believe many of us have seen crazier bugs.
7
runjake 8 hours ago 1 reply      
An aside of historical trivia: On an Amiga 1000, you used to be able to wiggle the mouse "too fast" and cause the machine to crash with its infamous "GURU MEDITATION ERROR"[1].

1. http://simhq.com/forum/files/usergals/2013/02/full-4656-5091...

8
LinaLauneBaer 11 hours ago 2 replies      
This reminds me a bit of how networking works on OS X and iOS (at least if you are using the "standard APIs" the way they are supposed to be used): the networking that is going on is tied to a run loop. Some apps tie the networking stuff to the main run loop. You can see which apps do that by simply opening a context menu in the app somewhere or opening a regular menu from the app's main menu. The run loops will be "halted" for as long as the menu is open and thus your networking will stop. As soon as you dismiss the menu the networking will continue.

At first this may seem totally bullshitty: Imagine a download manager. Do you really want the download to pause every time you open a context menu? Well, it turns out it is not such a bad idea in many cases: What if the context menu allows you to cancel the download? If the download were to go on in the background you would have to explicitly take care of that. If you tie the networking callbacks to your main run loop this simply can't happen.

Of course there are also a lot of use cases where you want your networking code to not have anything to do with your main run loop...
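
The menu behavior follows from run loop modes: menu tracking runs the loop in a different mode, so sources scheduled only in the default mode go quiet until the menu closes. In the C-level CFStream API the choice looks roughly like this sketch (`stream` being an already-created CFReadStreamRef):

    #include <CoreFoundation/CoreFoundation.h>

    /* Scheduled in the default mode only, the stream's callbacks stop
       whenever the run loop leaves that mode (menus, modal loops,
       drags). kCFRunLoopCommonModes keeps them firing through menu
       tracking in a Cocoa app. */
    void schedule_stream(CFReadStreamRef stream, Boolean pause_during_menus)
    {
        CFStringRef mode = pause_during_menus ? kCFRunLoopDefaultMode
                                              : kCFRunLoopCommonModes;
        CFReadStreamScheduleWithRunLoop(stream, CFRunLoopGetMain(), mode);
    }

Which is why the pausing is a choice rather than an accident: the default mode gives you exactly the "downloads stop while the menu is open" behavior the comment describes.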

9
tszming 11 hours ago  replies      
>> This problem has been reported when querying an ORACLE 7.3 data source by using the following ODBC drivers. Sqo32_73.dll is manufactured by Oracle Corporation. Microsoft makes no warranty, implied or otherwise, regarding this product's performance or reliability.

Maybe not the fault of Microsoft?

10
Evernote, the bug-ridden elephant jasonkincaid.net
394 points by ssclafani  22 hours ago   238 comments top 10
1
bdwalter 18 hours ago 16 replies      
I have been a loyal Evernote premium payer since 2009, and using it even longer. For a long time I recommended it to friends but since have stopped. I have developed some concerns with it over the years.

1. Fear of data loss... it's probably the largest part of my mistrust of all these dang cloud services that want to control/own me or otherwise lock me into their service. I run a 99.999% uptime, extreme scale, SaaS business across multiple active/active data centers, so I know exactly what it takes. It's incredibly hard to do, and I don't trust anyone at a rapid growth company to do it right. In the ever-constant scheduling battle between features and doing it right, features frequently end up winning, especially in consumer-focused SaaS businesses with meaningless SLAs. My Evernote library is clearly much much more important to me than it will ever be to Evernote. No amount of marketing spin will ever lead me to believe otherwise. This really is my own hangup though. At some point I may just get over this. To their credit, I have only ever lost a few notes in the 4-5 years I have been using the service.

2. Tight lock-in to the platform (the cynic in me always says it's clearly engineered this way) and a frustrating process for exporting my notes to another tool. This is a problem across the industry; everyone is playing the lock-in/stickiness game. Portability is key. A simple text export of my notes would go a long way to make me happy. I really don't want HTML exports of my text notes.

3. Security of their cloud service... frankly, I don't trust anyone and wish I could store my Evernote data on my own self-managed, self-encrypted, shared storage platform. Two-factor was a nice step in the right direction. Self-managed encryption keys are when I will stop whining about it. I understand this makes a lot of things hard, and I am willing to forgo some features to get this feature.

4. Lack of reasonable support for Linux. Evernote is now the sole tool keeping me from dumping my Mac and moving to Linux... Yup, note-taking is that important to me. I have tried Nevernote, Everpad and the like, but they are still pretty weak. I understand this isn't Evernote's problem and Linux is a very small market, but it's a big deal to me.

5. A frankly lousy text editor. Seriously, I keep expecting this to get better, and it just never quite gets there. And don't get me started on tabs and indenting. I often edit notes in Mac TextEdit and then copy them into Evernote. Not because TextEdit is great, but because it's predictable and just works. I'm not looking for advanced features here.

6. Strange as this may sound (I may be using the tool wrong), I really hate marking things off my list, like at the grocery store. It takes so dang long to mark off a list while pushing a shopping cart and fumbling with a phone. I now print my list out and cross things off with a pen because it's so much less frustrating... This may be an edge use case, but still... the phone apps (both Droid and iPhone) are not wonderful.

I fully subscribe to the belief that I am a weirdo, and these are really just my perceptions and random thoughts. I have remained an active, albeit reluctant user, and at this point plan to stay that way for at least a little bit longer. I always used to joke that if Evernote, Things, and Dropbox ever merged, I would happily pay double. These days, though, I am looking to support my own standalone instances of these types of tools without being tied to 3rd party cloudy services so tightly.

2
veidr 17 hours ago 4 replies      
As a paying user for years, I've had Evernote lose data many times -- sometimes important, irreplaceable data that I hadn't yet had time to back up elsewhere.

Evernote is some of the very worst software that has ever survived more than a few months on my computer without being deleted. Horrible show-stopping crash/data loss bugs are the norm, and have been increasing steadily as they add feature after feature with apparently no quality control at all.

Fundamentally, the job Evernote does (for me, but I assume also for most users with thousands of notes) is too important to delegate to a halfassed vc-backed startup that flies its engineers economy and has never heard of an integration test.

But replacing it isn't yet possible. It syncs across all platforms I use, does OCR of everything in both Japanese and English, including handwriting and text in photos, works out of the box with all my paper document scanners... There's just nothing else on the market (or if there is, PLEASE TELL ME!!) that does all that.

So Evernote hasn't lost me as a customer, yet. They've seemingly made a spectacular effort to do so, but... Life without Evernote would still be, on balance, more painful than with it.

But life with it is indeed pretty fucking painful, too.

3
GraffitiTim 20 hours ago 5 replies      
I've been an avid Evernote user since the beginning (one of the first few thousand users). I use it to record all sorts of ideas, thoughts, notes, reminders, research, and references.

One year ago, my girlfriend was using Evernote (on my suggestion) to write her travel journal on our trip to Southeast Asia. I saw her note sync a bunch of times (the iOS app shows a little blue arrow when it's uploading). But one day she opened it and the note was gone. I contacted support but they couldn't do anything. (They offered her a year of free Premium service and "apologized for the inconvenience".)

Since then, I've stopped recommending it to people because I don't want to feel personally responsible if they lose notes too. I also have a tinge of doubt every time I record important information. My biggest worry is Evernote quietly losing a note, because once I record something in Evernote I typically push it from my internal memory.

On top of that, their iOS app is incredibly slow. When I want to quickly jot an idea down, it's very inconvenient.

I've started using SimpleNote lately, which is far faster, but I don't know to what extent I should trust it to keep my data safe.

4
farnsworth 21 hours ago 3 replies      
I remember the day a year and a half ago when I went out apartment hunting in a new town, looking at my notes on apartments in Evernote's Android app. It was a complex note, with lots of text in deep hierarchies of bullet points. At one point I tried to edit it, and after a few visual glitches, the text of the note disappeared. Then it synced, and there was no undo or history option in the app as far as I could tell.

I was able to get the note back by driving back to my hotel, retrieving my laptop where the note was cached, and opening Evernote while offline to ensure it wouldn't sync and wipe out that copy. Pretty frustrating. I've learned some tough lessons about cloud services and free stuff.

5
elbenshira 20 hours ago 1 reply      
I'm a premium Evernote member, but I've also had lots of problems.

The iOS app is slow and clunky. I hate using it. It crashes all the time, especially when I'm trying to take snapshots of a document with many pages (and all previous snapshots simply disappear).

The desktop app is better, but they really could improve the writing experience. Pasting HTML blobs is impossible, and so is formatting my notes the way I want (I use TextExpander for sanity).

Evernote is great when it works, but they really need to fix their stability and bug problems.

6
Alex3917 14 hours ago 3 replies      
If you store data in a format that's not future proof, using software that's not open source, then you don't really get to complain when your data disappears. Especially if you're not storing the files locally and making regular backups.
7
zmmmmm 3 hours ago 0 replies      
Disturbing to see the number of reports of data loss. I am using Evernote heavily for my PhD research. I haven't found another tool that works as well as Evernote for this.

I've never had data loss, but I was very disappointed by my one interaction with Evernote support - a simple bug report (you cannot select more than one line of text in a bullet list in the Android app) turned into a series of 6 or 7 email interactions asking me to do things that were unrelated to the problem and clearly weren't going to (and didn't) help. It was obvious that no human had bothered to even attempt to reproduce the issue or even read my bug report in any detail. I don't know if they've outsourced their bug report handling to some untrained/unskilled offshore group, but if not they were trying extremely hard to emulate that. I don't like to think about interacting with these people in the event that I have data loss or some other kind of bug that actually matters.

8
tempestn 20 hours ago 4 replies      
I really hope Evernote's take-away from this is that they need to scale back development on all their auxiliary stuff - Hello, Food, whatever - as well as all but the most critical feature requests, and focus as much as possible on making the core experience bulletproof. I would _hate_ to have to give up Evernote, but like others here, am extremely apprehensive about the possibility of losing data.

One stop-gap they might be able to implement quickly would be a scale-up of their version control. They could throw money (storage space and bandwidth) at the problem, increasing the number and frequency of revisions stored. Certainly not as good as preventing loss in the first place, but reliable versioning would help minimize catastrophic loss in the meantime, and would still continue to be valuable once things are more stable.

9
gmu3 21 hours ago 1 reply      
This. I'm always particularly annoyed by the tech support when I've tried to submit bug reports. One time I found a reproducible bug in the Chrome Clipper and even offered a possible explanation/solution for what was happening, and the person first insisted that it wasn't happening. I couldn't believe he was telling me what wasn't happening on my screen when I was looking right at it. I pay for Premium, so next I requested to be put in contact with a developer to submit a bug report and was denied. Finally, like the author, they asked me for activity logs, which I also refused to fork over because they seemed too personal, so instead I just put up with a buggy clipper. I wish they focused less on selling socks and more on the software. [https://www.evernote.com/market/feature/socks?sku=SOCK00106]
10
lhl 20 hours ago  replies      
I've been a paying customer almost from the start. Unfortunately, as Evernote has expanded, it's gotten less and less useful for me.

Their web clipper is great, the best around IMO (especially since Clipboard folded); however, there's no way to exclude those clipped pages from search, so after using the clipper for a while, searching for just about any phrase returns mostly irrelevant results. Ideally it'd be possible to filter by source or have default searches exclude certain types of content.

Another example of this is that I have a well-curated and geotagged Travel Notebook (this was actually much harder than it should have been since their geocoder is picky and you can't really massage it). I'd love to be able to see these notes on a map, but the "Atlas" map view that Evernote provides doesn't let you filter by notebook (or anything really).

Evernote does a great job of making it fairly painless to capture notes and despite the author's problems, has generally worked well on syncing everything. It's never done a good job for triaging/filing/finding or organizing notes though, and it seems to simply get worse as you use it more (and with each redesign). Evernote seems to want to encourage you to put "everything" into it, but as you do, it becomes harder and harder to get what you need out of it. Honestly, I'm baffled at how the Evernote devs/designers use it.

11
Bzr is dying; Emacs needs to move gnu.org
390 points by __david__  2 days ago   298 comments top 3
1
hyperpape 2 days ago 9 replies      
I went back and looked at the older discussion, and it doesn't paint Stallman very well as the head of a project. He pins the question of whether to keep using bzr not on whether it is good or whether the Emacs developers want it, but on whether it's being "maintained".

But then he seems to define maintenance as having fixed a specific bug that's been around for over a year, blocking a point release.

He admits that he can't follow the developers list to see if they're genuinely doing active maintenance (reasonable enough: he has a lot on his plate), but also won't accept the testimony of Emacs developers that the mailing list is dead and there's no evidence of real maintenance.

When questioned, he says that there's too much at stake to abandon bzr if it can be avoided at all. But the proposed replacement is GPL software. This is just madness.

Refs: http://lists.gnu.org/archive/html/emacs-devel/2013-03/msg009....

http://lists.gnu.org/archive/html/emacs-devel/2013-03/msg008...

(and surrounding posts).

2
stiff 2 days ago 2 replies      
FWIW, just a few days ago I was browsing through the Emacs Bzr repository - after a full bzr clone, that took ridiculously long as well, a simple bzr blame takes 40-60 seconds to execute locally, and I have an SSD drive, four-core intel i7 and 8GB of RAM. I have never seen this kind of slowness with Git, with any repository size.
3
mikro2nd 2 days ago  replies      
>git won the mindshare war. I regret this - I would have preferred Mercurial, but it too is not looking real healthy these days

I confess that my perception of Mercurial is the diametric opposite of the author's. Recently I believe I have seen a modest resurgence of interest in Hg and increased uptake. Am I just seeing this through some peculiar VCS-warped glasses?

I believe that much of the popularity of git stems from github making it very easy to adopt, something that bitbucket doesn't seem to have pulled off as well.

12
On Hacking MicroSD Cards bunniestudios.com
383 points by fernly  6 days ago   68 comments top 20
1
josh2600 6 days ago 3 replies      
I'm not much into hero worship, but if you guys don't know Bunnie you should really take 5 minutes to understand who wrote this article. Bunnie is a hardware monster of the best kind and an EFF 2012 Pioneer award winner.

He's a hacker's hacker.

2
kabdib 6 days ago 2 replies      
It's amazing how much firmware has these back doors, where the engineers responsible have one or more of the following justifications:

- "I don't care, this is just my job. And I was told to do it by management." [what can I say? This sums up a lot of grunt coders I know]

- "What are the chances that anyone will find this?" [lack of appreciation for how smart and dedicated attackers can be]

- "So what if they do? It's not like it's useful" [lack of proper analysis]

- "How else are we going to run tests?" [poor design / fear]

- "Huh?" [absolutely oblivious about security]

I've worked on projects where we made the very conscious choice to leave doors like this open, but I doubt that most firmware shops are that intentional about it.

3
radicalbyte 6 days ago 4 replies      
So now my microSD card has a CPU 100x faster than my first computer (C64), and access to storage at least 10^5 times larger. Amazing.
4
Udo 6 days ago 0 replies      
For me, the big take-away here is not that SD cards have firmware that can be reprogrammed, but that there's apparently an opening for a comparatively high performance, cheap Arduino competitor. Being decidedly on the software side of things, I have to admit I was surprised to see that a 100MHz core with loads of memory could be produced for just a few cents now. There are probably dozens of low-cost places where fabrication of such a SoC would be only a minimal departure from churning out flash cards. I'd say let's do exactly that!
5
SwellJoe 6 days ago 1 reply      
So, this has potentially interesting value for implementing secure storage (assuming one can replace the whole firmware with something trusted).

I assume it would be possible to, for instance, make every "delete" operation a secure delete operation...wherein data gets overwritten a specified number of times. Shortening the useful life of the device, sure, but if security matters, that's a small price to pay.

Going further, what about a handler that serves out one set of data about what's on the device to any random person that plugs it in (like empty or with a few harmless photos or something), and another set of info to someone that has a key? Sure, for a high capability attacker, they might even know about this kind of firmware magic and know how to circumvent it, but it would make it very unlikely that some random person picking up your device would find anything that you want to keep secret.

Obviously, if your data is encrypted on the host system before writing to the card, that's reasonably safe...but for people in really dangerous situations, where torturing someone to obtain their key is not out of the question, making it seem like there's no data to obtain a key for is the best of all possible solutions.
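
The overwrite-on-delete idea only works below the flash translation layer - from the host side, wear-leveling can leave the old physical pages untouched - which is exactly why custom controller firmware is the interesting place to do it. Below is a toy sketch of the idea with an in-RAM stand-in for the flash; none of these names come from a real SD controller SDK.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SIZE 512
    #define NUM_BLOCKS 16

    /* In-RAM stand-in for physical flash so the sketch runs anywhere.
       Real firmware would drive the actual NAND program/erase ops here,
       after the logical->physical mapping has been resolved. */
    static uint8_t flash[NUM_BLOCKS][BLOCK_SIZE];

    static void block_program(uint32_t block, uint8_t pattern)
    {
        memset(flash[block], pattern, BLOCK_SIZE);
    }

    /* "Secure delete": overwrite the physical block with alternating
       patterns before it is freed. Each pass burns a program/erase
       cycle, trading device lifetime for unrecoverability. */
    static void secure_delete(uint32_t block, int passes)
    {
        for (int i = 0; i < passes; i++)
            block_program(block, (i % 2) ? 0x00 : 0xFF);
    }

    int main(void)
    {
        memcpy(flash[3], "a secret", 8);
        secure_delete(3, 3);
        printf("block 3 now starts with 0x%02X\n", flash[3][0]);
        return 0;
    }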

6
Spittie 6 days ago 1 reply      
It's kinda scary how many microprocessors and different firmwares are needed/used in today's computers/hardware, and how each one of them adds a new point of failure.

Just today I was reading a similar article, but involving HDDs instead of microSD cards (and even with a PoC): http://spritesmods.com/?art=hddhack

7
briandon 6 days ago 1 reply      

>> "It's as of yet unclear how many other manufacturers leave their firmware updating sequences unsecured. Appotech is a relatively minor player in the SD controller world; there's a handful of companies that you've probably never heard of that produce SD controllers, including Alcor Micro, Skymedi, Phison, SMI, and of course Sandisk and Samsung."
Which begs the question: so why target Appotech rather than Sandisk or Samsung?

8
nona 6 days ago 2 replies      
This and the article on Der Spiegel [1] mentioning how the NSA has a whole catalog of custom firmware for all major HDD makers tells me never to yield to the temptation of relying on built-in hardware-based full disk encryption.

[1]: http://www.spiegel.de/international/world/catalog-reveals-ns...

9
ChuckMcM 6 days ago 0 replies      
Excellent article. I wrote a simple SDIO driver for the STM32F4 and have three different MicroSD cards to test it with (they all behave slightly differently), and it's clear that such systems "working" is a small miracle in itself :-) All the vagaries of implementation.
10
gwu78 6 days ago 1 reply      
From Bunnie's page on his "open laptop" project:

"I'm shy on the idea of just selling it to anyone who comes along wanting a laptop. I'm worried about buyers who don't understand that "open" also means a bit of DIY hacking to get things working, and that things are continuously under development. This could either lead to a lot of returns, or spending the next four years mired in basic customer support instead of doing development; neither option appeals to me. So, I'm thinking that the order inquiry form will be a python or javascript program that has to be correctly modified and submitted via github; or maybe I'll just sell the kit of components..."

I hope he chooses the latter option.

If Bunnie is a "hacker's hacker" as someone else suggested in this thread, then I am confused why he believes the proper hoop to make a fellow "hacker" jump through is making sure they know some JavaScript or Python and how to upload to Github.

I thought "hacker's hackers", especially hardware hackers, were not the type to follow the path of least resistance, namely, JavaScript, Python and Github. Whereas, assembly and C (and FORTH, APL, Lisp, etc.) are the languages of the "hacker's hacker".

But that's just me. Maybe I am the only one. If so, pay no mind.

11
K2h 6 days ago 2 replies      
It is a little dated, but this doc shows 200mA required for the card on high-speed writes. I was curious how much power is needed to run that little uC.

http://media.digikey.com/pdf/Data%20Sheets/M-Systems%20Inc%2...

12
revelation 6 days ago 1 reply      
You can download the talk from here:

http://wtf1.muling.lu/30c3/Saal_1/Day_3/5294-30C3_-_5294_-_e...

(This is a streamdump, so don't expect seeking to work, and it might cause issues for your player)

13
baruch 5 days ago 1 reply      
I'm not quite sure what is so special here. It is a device, it has firmware, the firmware can be upgraded. The same is true for your HDD or SSD. Why is an SD Card any different?

If someone hands you an SSD in an external enclosure do you automatically suspect it too? A similar hack is known to work there, witness the number of SSDs that needed a firmware upgrade after their field release.

I do applaud the discovery of how to do it and the proof that it really does work. It is nice work in that regard, and I have a few SD cards whose firmware I'd be happy to hack for fun if nothing else (damn fake SDs; if they just advertised their real capacity they could at least be useful).

14
pedrocr 6 days ago 0 replies      
Just another reason why we need to start getting direct access to the underlying flash instead of relying on vendors to provide a bunch of unupdatable translation software. This is particularly the case with SSDs where the end result of all this is "just buy Intel SSDs if you value your data" with the corresponding price premium.
15
voltagex_ 5 days ago 0 replies      
http://www.youtube.com/watch?v=r3GDPwIuRKI is one of the recordings of the talk. CCC will have others in free formats, if you prefer.
16
analog31 6 days ago 1 reply      
Et Tu, USB flash drives?
17
pasbesoin 6 days ago 2 replies      
I've only read part way through, but good grief, you owe it to yourself to read this. Also, in retrospect, it seems obvious. Nonetheless...

Not having finished the article, one of my initial reactions: I guess my intuition was right. It's not time to throw away those optical disks (and drives) yet.

18
blinkingled 6 days ago 0 replies      
> You are not storing data, you are storing probabilistic approximation of your data

Ha!

19
tommis 5 days ago 1 reply      
So, could this mean that one could theoretically wire a MicroSD card directly into an ethernet plug and, with some voodoo, harness PoE to create an ethernet plug with busybox on it?
20
chippy 6 days ago 0 replies      
This is an extremely well written blog post. It should set the standard. Bravo!
13
Snapchat Phone Number Database Leaked snapchatdb.info
380 points by lightcontact  4 days ago   213 comments top 10
1
antimatter15 3 days ago 7 replies      
The top comment on Reddit r/netsec's corresponding coverage has mirrors on Mega.co.nz for the files [1]

I couldn't find my own data in the set, and actually it seems like lots of entire area codes are missing.

Assuming `cat schat.csv | uniq | cut -c1-4 | wc -l` is the proper command, there are only 76 of 322 [2] US area codes represented.

It appears there are two Canadian area codes represented in the database: 867 and 204. There are also 246 US area codes which are not represented in the database. Assuming a relatively uniform distribution of phone numbers in the US (which is not at all a safe assumption), the average US snapchat user has better odds of not being in the list than being in it. Sampling from the set of my snapchat friends who are not in my area code, 3 of 13 can be found in the database.

If your phone number is in any of these states, you're not in the database: Alaska, Delaware, Hawaii, Kansas, Maryland, Mississippi, Missouri, Montana, Nebraska, Nevada, New Hampshire, New Mexico, North Carolina, North Dakota, Oklahoma, Oregon, Rhode Island, Utah, Vermont, West Virginia, Wyoming.
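
For anyone re-running the numbers, a cleaner pipeline is below (a sketch; it assumes the first CSV field is a bare ten-digit number, per the dump's description):

    # Area code is the first three digits; sort before deduplicating,
    # since uniq only collapses adjacent duplicates.
    cut -c1-3 schat.csv | sort -u | wc -l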

[1] http://www.reddit.com/r/netsec/comments/1u4xss/snapchat_phon...

[2] I'm matching a regex against this list http://en.wikipedia.org/wiki/List_of_North_American_Numberin...

2
cenhyperion 3 days ago 1 reply      
I'd just like to remind everyone that Snapchat was aware of this exploit and dismissive of it.

http://www.theverge.com/2013/12/27/5249304/snapchat-dismisse...

3
rdl 3 days ago 0 replies      
Possibly they shouldn't have pissed on the people who notified them of the vulnerability, and on the journalists who broke the story?

(aside from not being vulnerable to this in the first place, but that actually is a lot to ask. I still can't believe anyone relied on the Snapchat model of security more so than any other app, although from an ease of use, non-security perspective, sure, it's reasonable.)

4
aheilbut 3 days ago 3 replies      
I guess I'm dating myself, but didn't we used to call that the phone book?
5
ufmace 3 days ago 0 replies      
Anyone else tried putting together some stats from the info?

    name                                          | areacode |  count
    ----------------------------------------------+----------+--------
    Chicago Suburbs                               | 815      | 215953
    Eastern Los Angeles                           | 909      | 215855
    San Fernando Valley                           | 818      | 205544
    Southern California                           | 951      | 200008
    Los Angeles                                   | 310      | 196183
    Northern Chicago Suburbs                      | 847      | 195925
    Denver-Boulder                                | 720      | 188285
    Downtown Los Angeles                          | 323      | 168565
    New York City                                 | 347      | 166374
    New York City                                 | 917      | 165420
    Fort Lauderdale                               | 954      | 153522
    Northern New York                             | 315      | 147447
    Buffalo                                       | 716      | 144939
    Southern Illinois                             | 618      | 144280
    Boulder-Denver                                | 303      | 139265
    Southern Michigan                             | 617      | 138821
    Northeastern New York State                   | 518      | 138043
    Champaign-Urbana                              | 217      | 135837
    Oakland                                       | 510      | 130531
    Miami                                         | 786      | 117906
    Westchester County, NY                        | 914      | 116632
    Western and Northern Colorado                 | 970      | 115378
    San Francisco                                 | 415      | 108883
    Miami                                         | 305      | 104415
    Southeastern Colorado                         | 719      | 102932
    Manhattan                                     | 646      |  96646
    Mountain View                                 | 650      |  94430
    Chicago                                       | 312      |  70709
    Southwest Connecticut                         | 203      |  60629
    Bronx, Queens, Brooklyn                       | 718      |  51086
    Boston                                        | 857      |  41857
    Central Arizona                               | 480      |  35631
    South Carolina                                | 864      |  33034
    Eastern Ohio                                  | 330      |  32721
    Arkansas                                      | 870      |  28940
    Idaho                                         | 208      |  26827
    Southeastern Virginia                         | 757      |  21170
    Los Angeles                                   | 213      |  13705
    Southeastern Ohio                             | 740      |  11597
    Eastern San Francisco                         | 209      |  11356
    Seattle                                       | 206      |  10623
    Fort Lauderdale                               | 754      |  10131
    Maine                                         | 207      |  10126
    Northern Louisiana                            | 318      |   9842
    Indianapolis                                  | 317      |   8151
    Northwestern Arkansas                         | 479      |   7300
    Manitoba                                      | 204      |   7211
    Minnesota                                     | 320      |   7162
    Southeastern Michigan incl. Ann Arbor         | 734      |   7077
    Eastern part of Southern New Jersey           | 609      |   6952
    Pennsylvania                                  | 484      |   6314
    Manhattan                                     | 212      |   3970
    Pennsylvania                                  | 610      |   3930
    Southern New York State                       | 607      |   3437
    Central Florida                               | 321      |   3258
    New York City                                 | 929      |   2651
    Florida                                       | 863      |   2642
    Southeastern California                       | 760      |   2523
    Southwestern Wisconsin                        | 608      |   2217
    Central Texas                                 | 325      |   1542
    Central Georgia                               | 478      |   1396
    Western Central Alabama                       | 205      |    825
    Eastern Kentucky                              | 606      |    565
    DuPage County, Illinois                       | 331      |    512
    Eastern part of central New Jersey            | 732      |    507
    South Dakota                                  | 605      |    375
    Knoxville, Tennessee                          | 865      |    263
    Southwestern Connecticut                      | 475      |    253
    Eastern Iowa                                  | 319      |    198
    Georgia                                       | 470      |    163
    Minneapolis                                   | 612      |    103
    San Fernando Valley, LA                       | 747      |     84
    Canadian territories in the Arctic far north  | 867      |     31
    Washington DC                                 | 202      |      3
    Georgia                                       | 762      |      2
    Dallas                                        | 469      |      1
I wonder where they were getting the numbers to search from. From how they described the vulnerability, I would have thought they would just iterate through all possible phone numbers. If they're doing that, it's strange that there's exactly 1 number for the Dallas area code.
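
A tally like the one above can be reproduced with a one-liner (a sketch, assuming the same one-number-per-line CSV layout; the region names would have to be joined in from a separate area-code table):

    # Count rows per area code and show the 20 largest (phone number
    # assumed to be the first comma-separated field).
    awk -F, '{ n[substr($1, 1, 3)]++ } END { for (a in n) print n[a], a }' \
        schat.csv | sort -rn | head -20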

6
untog 3 days ago 3 replies      
Not at all surprised. Anyone that used the app would be suspicious of the backend behind it. Should have taken that $3bn while you had the chance.
7
scaramanga 3 days ago 2 replies      
CSV: magnet:?xt=urn:btih:bab9548c3770188c70d27ded9b22348f5b979713&dn=Snapchat+database+CSV&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A80&tr=udp%3A%2F%2Ftracker.publicbt.com%3A80&tr=udp%3A%2F%2Ftracker.istole.it%3A6969&tr=udp%3A%2F%2Ftracker.ccc.de%3A80&tr=udp%3A%2F%2Fopen.demonii.com%3A1337

SQL: magnet:?xt=urn:btih:f7b1cec6280edb8169d63550ba2dfb224df7810d&dn=Snapchat+database+SQL&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A80&tr=udp%3A%2F%2Ftracker.publicbt.com%3A80&tr=udp%3A%2F%2Ftracker.istole.it%3A6969&tr=udp%3A%2F%2Ftracker.ccc.de%3A80&tr=udp%3A%2F%2Fopen.demonii.com%3A1337

Both: magnet:?xt=urn:btih:fae9c0a8b2eee2f9cc31c713f21a4cda4083612b&dn=Snapchat+Database+CSV+%26amp%3B+SQL&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A80&tr=udp%3A%2F%2Ftracker.publicbt.com%3A80&tr=udp%3A%2F%2Ftracker.istole.it%3A6969&tr=udp%3A%2F%2Ftracker.ccc.de%3A80&tr=udp%3A%2F%2Fopen.demonii.com%3A1337

8
aabalkan 3 days ago 1 reply      
It's taking too much time to download each file even though they're only 40 MB. I wish they had put it up as a torrent in the first place.

Regarding the leak: yeah, that's what happens when you focus on the product but not on the security and reliability of your system. Snapchat, Whatsapp and many others have been hacked numerous times, and yet it still happens.

9
sschueller 3 days ago 1 reply      
I still don't understand why you would turn down $3 billion. How will you ever make money with snapchat and how is it not a fad that will eventually die?
10
gibsonsecurity 3 days ago  replies      
For the record, we don't know anything about SnapchatDB.

But it was only a matter of time until this happened; the exploit still works with minor modifications, you just have to be smart about it.

14
I fought my ISP's bad behavior and won erichelgeson.github.io
377 points by helfire  4 days ago   104 comments top 19
1
JoshTriplett 4 days ago 2 replies      
Very nicely done: reporting this as abuse to the companies offering these affiliate programs seems quite appropriate, and it sounds like they reacted appropriately. One person complaining to an ISP is noise; one person making an abuse report is all it takes to get that ISP banned from the affiliate program.
2
afhof 4 days ago 3 replies      
Cox does something similar but bypasses the DNS records entirely and just slipstreams in a response. I noticed Cox would redirect javascript requests to their own HTTP server and put in their own snippets, effectively doing mass javascript injection.

The snippet ended up being some sort of alert about upcoming maintenance, but using a malicious technique for a benign purpose is the path to the dark side. Use HTTPS!

(I use 8.8.8.8, it didn't help)
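
One cheap way to catch this sort of injection is to compare what arrives over plain HTTP with the same resource over HTTPS, which the ISP cannot rewrite (a sketch; example.com/app.js stands in for any script served over both schemes):

    # The two hashes should match; a mismatch suggests something on
    # the path is rewriting the plaintext HTTP response.
    curl -s  http://example.com/app.js | sha256sum
    curl -s https://example.com/app.js | sha256sum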

3
gpcz 4 days ago 0 replies      
The cynical side of me says that the ISP is just going to redirect the author's traffic to the "pure" DNS server in the future (even when he or she directs traffic to the main one) unless they get in serious enough trouble with one of the companies this first time.

If anyone wants to do this in the future, I'd recommend just sending affiliate abuse emails with no notice to the ISP. Also, the future person may want to revise the [2] script to scan in a more surreptitious manner (change the order, add delays, simulate legit web traffic, etc).

4
jauer 4 days ago 3 replies      
As an ISP, when we were considering using Aspira, they claimed that no referral tokens would be replaced and that the only behavior was injecting a popup coupon window.

I decided not to proceed with it because it seemed like a support nightmare and tampering with non-malicious subscriber traffic crosses a line.

Their marketing affiliates (such as Cash4Trafik) are always reaching out to CEO types at small ISPs and the money they bring (particularly when you are small) can be hard to pass up.

5
lambda 4 days ago 2 replies      

  This also shows a weakness in DNS. There is currently no
  way to validate the DNS record you're being served is what
  the person hosting the website intended.
That's what DNSSEC is for, but it hasn't become pervasive enough yet to be able to depend on it.
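
You can already watch validation work where it is deployed; a quick sketch (ietf.org is one of the relatively few signed zones, and Google's 8.8.8.8 has validated DNSSEC since early 2013):

    # An "ad" (authenticated data) flag in the reply header means the
    # resolver validated the DNSSEC chain for this answer.
    dig +dnssec ietf.org @8.8.8.8 | grep -w flags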

6
zquestz 4 days ago 1 reply      
Eric, I am very sorry to see this happen to you. Unfortunately more and more companies are using our data for marketing purposes.

All is not lost though.

There are several ways you can protect yourself from these practices. The first thing I would do is get a router capable of using dnscrypt-proxy (http://www.opendns.com/technol.... Then you can be confident that your DNS traffic is not being modified by your ISP. It does require that you have trust in a 3rd party DNS provider like OpenDNS, but at the end of the day you have to trust someone to provide DNS lookups.

The second option is to setup DNSSEC so that you can verify where your DNS responses are coming from. While people will still be able to intercept what sites you're looking up, at least you know you're getting valid responses which is better than your situation is currently.

Third is to use both. =)

Anyhow, really awesome to see people standing against these practices. It takes users complaining to make change. The sad truth of the matter.

7
dmourati 4 days ago 2 replies      
Super shady stuff. I never rely on any ISP-provided DNS servers. I'm glad you talked to the e-tailers to let them know what was going on. These business practices do introduce latency, regardless of what he told you. Not to mention, they are highly unethical and dishonest.
8
sloop 4 days ago 3 replies      
If your ISP and/or Aspira were making any significant amount of affiliate commissions, I would be surprised if the merchants do not take action against them for fraud.

This sounds like the same behaviour that Shawn Hogan got in trouble for with cookie stuffing http://en.wikipedia.org/wiki/Shawn_Hogan

9
gnu8 4 days ago 1 reply      
Is there a way we can choke companies like Aspira by making a concerted distributed effort to disrupt the referral programs they exploit (either by reporting them or by feeding them false referrals somehow)?
10
tdumitrescu 4 days ago 1 reply      
"I will continue to monitor periodically their DNS entries and compare them with other public DNS servers."

This would make for a great watchdog site to provide visibility across different ISPs (and could also discourage other ISPs from pulling this crap).
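
A minimal version of that watchdog fits in a few lines of shell; a sketch, where 203.0.113.1 is a placeholder for the ISP's resolver (note that CDN-backed domains legitimately return different answers from different resolvers, so a real monitor would compare CNAME targets rather than raw IPs):

    # Flag domains where the ISP resolver disagrees with a public one.
    for host in www.example.com shop.example.net; do
        isp=$(dig +short "$host" @203.0.113.1 | sort | tr '\n' ' ')
        pub=$(dig +short "$host" @8.8.8.8     | sort | tr '\n' ' ')
        [ "$isp" = "$pub" ] || echo "MISMATCH $host isp=[$isp] public=[$pub]"
    done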

11
rcfox 4 days ago 1 reply      
On a slightly related note: in Chrome extensions it's possible to redirect DNS requests on a per-URL basis. This is how Media Hint works to let non-US Netflix users access the US version of the site.

I'm surprised we haven't seen similar behaviour from Chrome extensions. I'm sure it would be caught eventually, but this isn't exactly something that people tend to look for, so it would take a while for people to catch it.

12
neil_s 4 days ago 2 replies      
Interestingly, you might have benefitted more from keeping quiet about this. While the original retailers are losing money through this, you aren't really affected negatively by them doing it. In fact, with this additional revenue source, they might be able to support thinner margins on their broadband charges, saving you some money. You did the morally correct thing, but perhaps at a potential personal cost.
13
natch 4 days ago 3 replies      
I'd like to try out this curl command. I'm not using MacPorts, though; like many people, I switched to brew some time ago. Is there a quick way to see if my curl install is compiled with 'ares', whatever that is?
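
There is; curl reports its resolver backend in its version output, regardless of whether it came from MacPorts, brew, or the system:

    # "c-ares" in the version line, or "AsynchDNS" in the Features
    # line, means curl was built with an asynchronous resolver.
    curl -V | grep -i -E 'c-ares|AsynchDNS'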
14
AlonsoGL 3 days ago 2 replies      
Here it goes: I'm behind an ISP-wide cache. Any traceroute passes through transtelco.net (the ISP used to have its own infrastructure for VoIP services, Megafon); now I have 5 or 6 DNS hops, and all my traffic goes through Transtelco.

    traceroute to news.ycombinator.com (198.41.191.47), 30 hops max, 60 byte packets
     1  customer-GDL-**-***.megared.net.mx    << 177.230.**.*** dynamic IP; GDL is the company's city
     2  10.0.28.62 (10.0.28.62)  8.939 ms  8.941 ms  8.935 ms
     3  10.2.28.195 (10.2.28.195)  8.912 ms  8.903 ms  8.891 ms
     4  pe-cob.megared.net.mx (189.199.117.***)  8.878 ms  8.866 ms  14.201 ms  << COB is the user's city
     5  10.3.0.29 (10.3.0.29)  23.494 ms  23.483 ms  23.408 ms
     6  10.3.0.13 (10.3.0.13)  22.842 ms  19.609 ms  19.596 ms
     7  10.3.0.10 (10.3.0.10)  19.560 ms  19.555 ms  19.536 ms
     8  201-174-24-233.transtelco.net (201.174.24.233)  19.527 ms  20.650 ms  19.468 ms
     9  201-174-254-105.transtelco.net (201.174.254.105)  34.239 ms  31.793 ms  31.268 ms
    10  fe3-5.br01.lax05.pccwbtn.net (63.218.73.25)  31.792 ms  31.736 ms  33.533 ms
    11  any2ix.coresite.com (206.223.143.150)  32.834 ms  33.221 ms  33.429 ms
    12  ae3-50g.cr1.lax1.us.nlayer.net (69.31.124.113)  41.288 ms  41.228 ms  41.231 ms
    13  ae2-50g.ar1.lax1.us.nlayer.net (69.31.127.142)  42.632 ms  ae1-50g.ar1.lax1.us.nlayer.net (69.31.127.138)  35.192 ms  33.860 ms
    14  as13335.xe-11-0-6.ar1.lax1.us.nlayer.net (69.31.125.106)  35.143 ms  44.714 ms  44.666 ms
    15  198.41.191.47 (198.41.191.47)  37.638 ms  37.239 ms  36.997 ms
I don't know how normal or ethical this type of cache is. No download limits; I pay for 10mb and get 20mb (2000-2300kbps) downloads, while uploads are limited to 1mb.

15
samweinberg 3 days ago 0 replies      
Anyone know if Time Warner Cable does this?
16
GigabyteCoin 3 days ago 0 replies      
Congratulations. What they were doing was absolutely evil in my opinion.
17
ozh 4 days ago 0 replies      
+1 to OP, and +2 to companies who responded positively (and -3 to ISP, obviously)
18
_RPM 4 days ago 1 reply      
Gaming the system seems to be the secret to winning.
19
philip1209 4 days ago  replies      
This is why you should encrypt your DNS.
15
DigitalOcean leaks customer data between VMs github.com
374 points by sneak  5 days ago   200 comments top 8
1
AaronFriel 5 days ago 4 replies      
This is a huge problem and there seems to be a good deal of misinformation about this issue that has confused things. I'm going to debunk two things: first, that DigitalOcean is not violating user expectations (they are), and second, that doing this correctly is difficult (it isn't). The tl;dr is that if DigitalOcean is doing this, they are not using their hardware correctly.

First, it's not uncommon for virtual disk formats to be logically zeroed even when they are physically not; for example, when you create a sparse virtual disk it appears to be X GB, all zeroed and ready to use. Of course, it's not. And this doesn't just apply to virtual disks: such techniques are also used by operating systems when freeing pages of memory - when a page of memory is no longer being used, why zero it right away? Delaying activities until necessary is common and typically built in. Linux does this, Windows does it [http://stackoverflow.com/questions/18385556/does-windows-cle...], and even SSDs do it under the hood. For virtual hard disk technology, Hyper-V VHDs do it, VMWare VMDKs do it, sparse KVM disk image files do it. Zeroed data is the default, the expectation for most platforms. Protected, virtual-memory-based operating systems will never serve your process data from other processes even if they wait until the last possible moment. AWS will never serve you other customers' data, Azure won't, and none of the major hypervisors will default to it. The exception to this is when a whole disk or logical device is assigned to a VM, in which case it's usually used verbatim.
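
The logical-zeroing behavior described above is easy to demonstrate with a sparse file (a sketch using GNU coreutils):

    # 1 GiB apparent size, almost nothing allocated on disk, yet
    # every byte reads back as zero.
    truncate -s 1G sparse.img
    du -h --apparent-size sparse.img      # ~1.0G
    du -h sparse.img                      # ~0
    head -c 16 sparse.img | od -An -tx1   # sixteen 00 bytes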

This brings me to the second issue. Because using a logical device may be what DigitalOcean is doing, it's been asked if it's hard for them to fix it. To answer that in a word: No. In a slightly longer word: BLKDISCARD. Or for Windows and Mac OS X users, TRIM. It takes seconds to execute TRIM commands on hundreds of gigabytes of data because, at a low level, the operating system is telling the SSD "everything between LBA X and LBA X+Y is garbage." Trimming even an SSD with a heavily fragmented filesystem takes only a matter of seconds because the commands to send to the firmware of the SSD are very simple, very low bandwidth. The SSD firmware then marks those pages as "free" and will typically defer zeroing them until use. Not only should DigitalOcean be doing this to protect customer data, but they should be doing it to ensure the longevity of their SSDs. Zeroing an SSD is a costly behavior that, if not detected by the firmware, will harm the longevity of the SSD by dirtying its internal pages and its page cache. Not to mention the performance impact for any other VMs that could be resident on the same hardware as the host has to send 10s of gigabytes of zeroes to the physical device.

Not only is DigitalOcean sacrificing the safety of user's data, but they're harming the longevity of their SSDs by failing to properly run TRIM commands to clean up after their users. It hurts their reputation to have blog posts like this go up, and it hurts their bottom line when they misuse their hardware.
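
Concretely, the cleanup being argued for is a one-liner on the host; a sketch, with an illustrative device path (and, as the edit below notes, not every drive guarantees zeroes on read after a discard):

    # Tell the SSD that every block backing the destroyed VM is
    # garbage; fast, and spares the flash a full rewrite of zeroes.
    blkdiscard /dev/vg0/droplet-backing-volume

    # Inside a running guest, fstrim is the filesystem-level
    # equivalent for the mounted root.
    fstrim -v /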

Edit: As RWG points out, not all SSDs will read zeroes after a TRIM command, so other techniques may be necessary to ensure the safety of customer data.

2
xSwag 5 days ago 8 replies      
TL;DR: In the DigitalOcean web panel you can check the "scrub data" checkbox when destroying a VM. When using the API this option is not ticked. This can lead to other customers being able to retrieve your data.

The author thinks that this is a security issue because the option should be enabled by default. However, (I assume) it's not in DigitalOcean's interest to do a full disk scrub because it reduces the lifespan of their SSDs.

If a user forgets to log out of Facebook on a public computer, is it Facebook's responsibility? Similarly, if a user does not correctly delete data on a budget host, is it the host's fault?
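
For reference, the v1 destroy call in question reportedly looked roughly like this (a sketch from contemporary reports of the API; the credentials are placeholders and the parameter name is an assumption, so check the docs):

    # Destroy a droplet with scrubbing explicitly requested; omitting
    # the flag is what left data behind.
    curl "https://api.digitalocean.com/droplets/123456/destroy/?client_id=CLIENT_ID&api_key=API_KEY&scrub_data=true"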

3
sneak 5 days ago 2 replies      
Oh, hey guys, they've responded. It's no big deal, they just disabled the security because _users were complaining_.

Turns out it "add[s] a very large time to delete events" when you actually delete things when a user makes an api call to DESTROY. Who knew?

http://i.imgur.com/MFW8ng6.png

4
nbpoole 5 days ago 1 reply      
Interesting: this sounds like a recurrence of the same issue which was described a number of months back:

https://www.digitalocean.com/blog_posts/resolved-lvm-data-is...

At the time, the blog post claimed that the issue was resolved and that data was now being wiped by default. I wonder why that would have changed.

5
tachion 5 days ago 4 replies      
This reminds me of my own story: a few weeks ago I was trying out their service, and on a newly created droplet I noticed a... shell history of downloading and executing a shell script:

    1  clear
    2  ls
    3  clear
    4  wget https://kmlnsr.me/cleanimage.sh
    5  rm cleanimage.sh
    6  cd /tmp/
    7  wget https://kmlnsr.me/cleanimage.sh
    8  chmod +x cleanimage.sh
    9  ./cleanimage.sh
This looked very disturbing, so I went and checked what that script was. It is available for everyone to read and seems to be part of their provisioning procedure for the VMs, written by some guy who works for DigitalOcean as a 'Community Organizer' (though at that point I thought the website might have been created by an attacker to mislead).

Not only does it look bad and alarming to customers, it also poses a security threat: an attacker could target that website and/or server and replace the script with something nasty. How long before they'd notice? No idea, but I opened a ticket about it right away, giving them some advice on why it's bad (availability, scaling, performance, security and PR reasons) and also how to handle it better, and it seems nothing has been done about it so far.

That rings a bell in my head not to use DigitalOcean's service, as the things they do look pretty amateur.
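
One cheap mitigation for the scenario described above is to pin the provisioning script to a known checksum instead of executing whatever the URL happens to serve; a sketch, where the hash is a placeholder for the value of a previously audited copy:

    # Refuse to run the script unless it matches the audited copy.
    wget -q https://kmlnsr.me/cleanimage.sh
    echo "<sha256-of-audited-copy>  cleanimage.sh" | sha256sum -c - \
        && sh cleanimage.sh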

6
sillysaurus2 5 days ago 1 reply      
There is a simple solution to this: don't trust providers to do what they say they'll do with your data. You should scrub any drive that's ever contained sensitive info before you throw it away, and terminating a VM instance is precisely equivalent to handing the VM's harddrive to your provider.

It's pretty easy nowadays to scrub a drive. Writing zeroes would suffice.
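
In practice that means something like the following from inside the VM, as the very last step before destroying it (a sketch; /dev/vda is the usual virtio disk name on KVM guests, but double-check it, because this irreversibly kills the running system):

    # Overwrite the whole virtual disk with zeroes; the instance will
    # not survive this, which is the point. Run only at end of life.
    dd if=/dev/zero of=/dev/vda bs=1M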

Personally, I'd worry more about what data is being leaked when your VM is paged to disk on your provider's servers. Parts of each of your VMs will probably reside in the pagefile at some point, so therefore writing zeroes won't save you if the provider has bad disposal practices (like not scrubbing before disposal). So it seems impossible not to have to trust a cloud computing provider whatsoever; some basic trust seems to be a requirement.

But that minimum level of trust should be the extent to which you trust them. Not scrubbing your drive before handing it over is placing faith where faith doesn't belong.

7
comice 5 days ago 1 reply      
Since day one, Amazon EC2 used a copy on write system with their LVM volumes to protect against this problem (without them having to do expensive zeroing operations).

This has been an identified and solved problem for YEARS. No excuse for a modern VPS/IaaS provider to be leaking customer data in this way, except incompetence.

8
jlawer 5 days ago  replies      
Talk about a link-bait title. It's a bit hard to call it a leak: it's a configuration option that is clearly presented in the web UI, and it's optional because a scrub adds ~10 minutes of billing to the small 512MB VMs.

If you're using an overlay or API on top of a cloud service, it's the overlay's responsibility to ensure consistency with your expectations. The API is consistent with the UI.

While other cloud providers treat the time this takes as non-billable, DO doesn't. Higher utilization is how they are able to offer their prices and still have some modicum of service.

16
Lessons learned from my failed startup after 2 years, 300 users and 0 revenue sergioschuler.com
363 points by sergioschuler  4 days ago   159 comments top 13
1
patio11 4 days ago 5 replies      
This is a fantastic writeup (and like nearly all worthwhile writing on the subject, I don't necessarily agree with all of it).

Two elaborations:

1) General advice to non-technical founders, not specific to this post: If sales is one of your primary skill sets, and you cannot sell one developer on working for you, you may want to have a brief heart-to-heart with yourself on whether you are sufficiently skilled at selling to build a company which will live or die based on your sales ability.

2) His advice about starting with 1 anchor client for a SaaS, expanding to 10 via expenditure of shoe leather, and then starting to worry about scalable approaches to customer acquisition is very, very good. (I don't know if I definitely would endorse the "An Indian company expressed desire to buy something from me other than the thing I was building, so I should have built that instead." That would turn on a lot of things, including how serious that company was about actually buying the thing. There is a world of difference between "I would buy a Widget from you" and "I commit to accepting delivery of a Widget from you, where a Widget broadly does X, my timeframe is Y, and your payment will be $Z." I'd be looking for a letter of intent or a check as a filter for seriousness following that Skype call before making a bet-the-business decision on it, personally, but I obviously don't know the specifics of what was said.)

2
Killah911 4 days ago 0 replies      
I don't understand people's (not necessarily the OP's) utter obsession with philosophies, especially in the startup world, where being adaptive and surviving is key.

Lean Startup, great book, decent ideas, not the religion that it's become. I'm sick of hearing, hey do this the lean way and it'll "significantly improve" how well you do, after all it's the blueprint for success. Personally, I don't buy into that. Here's my view of success in reality: do whatever works (that's legal & up to your moral standards), be opportunistic and get lucky (yes, hard work and measuring metrics alone don't do crap).

MVP and idea validation are great concepts & helpful common language. In hindsight all "successful" startups seem to have a "pattern", but in all seriousness, there isn't a friggin algorithm for success in startups, otherwise algorithms would've replaced entrepreneurs a long time ago. (Although selling success patterns & software based on such to wantrepreneurs is a great idea.)

I'm sorry Sergio's experience happened. It's easy to force cause and effect onto a narrative. It could very well have been that the developer Sergio met was at a point in his life where he really just wanted to build something great, and did end up building the awesomest thing. Instead of dissecting the reasons his startup failed: had luck been a little more favorable, we might be analyzing how it became a huge success.

Bottom line: my heartfelt congratulations to Sergio on stepping up despite the risks and having a crack at it. If nobody ever stepped up, and we all gave in to our negative biases and overanalyzed the crap out of everything before starting, we'd still be polishing stone wheels.

I know how shitty it feels. But remember: hindsight is 20/20, and cause-and-effect should really be cause-plus-luck-and-effect. I hope you're a better entrepreneur for it and will be back in the game soon.

3
ry0ohki 4 days ago 1 reply      
"The developer had no intention of being the projects developer (?) he was not really a developer, he was a computer science graduate who owned a webdev shop and was used to managing, not coding."

Heard this story so many times. Amazing how many people join a startup and don't want to do the actual work. Remember that scene in The Social Network where Mark Zuckerberg calls his outsource team about progress on that latest feature? No? Me either.

4
at-fates-hands 4 days ago 1 reply      
"Since we were 3 business people, we spent all this time into idiot plans, budget forecasts, BUSINESS CARDS, fancy website all useless things which in the end did not contribute to anything."

I've been a part of a lot of startups and this is far and away the best advice. It was a common theme with two startups I worked for during the boom years. One CEO's hubris was stunning: 10 million privately funded, and he blew most of it on season tickets and suites at stadiums to "entertain" big prospects (never mind that we didn't have any "big" prospects at the time!), remodeled the office to the tune of a few hundred thousand dollars... it goes on, but you get the idea.

When you're in a startup, it really is about getting your product shipped, and making sure that's where the focus is.

Great writeup, and I'm glad you saw the error of your ways. Lots of people never gain the wisdom you have until after two or three failed attempts.

5
wrath 4 days ago 0 replies      
Good article, but I would look at this "failure" from a glass-half-full perspective. You "won" because you've learned valuable lessons you can take to your next idea. I've had many products that didn't gain many users in their respective marketplaces, but I learned from each and every one of them. All these experiences brought me to where I am today (CTO of a 45+ employee company). No failures in my past as far as I'm concerned; just lots of self-teaching (the kind you can't get in school).

>> ""An Indian company expressed desire to buy something from me other than the thing I was building, so I should have built that instead.""

I may be in the minority, but I agree with him, on one condition: if this Indian company only wanted to pay a small monthly subscription fee for the product, I would never have agreed to develop "their" ideas. I would have taken their feedback and put it in the big pile with all the other feedback I'd gathered. But I would have pitched this Indian company a different story: a professional services contract instead of a product. I did something similar in the past and it worked out very well, because in a business, money is king. With no money you can't do the things you need to do, like attend conferences to sell your idea, buy adwords, hire solid developers, and pay yourself a salary so you can devote your time to the idea.

In my case the customer was willing to pay ~$10k a month to get what he wanted. We built it for him while building our own product. Once we got big enough and could sustain ourselves without our original customer, I handed the customer off: the developer who maintained the project was interested in taking it on himself. We came up with a 6-month transition plan, including lots of product/project management help, office space, etc. It was a win/win situation in the end.

Doing this is not for everyone though. There were many days I cursed this customer for taking up the majority of our resources. We had to be very good at differentiating between their requirements and the market's requirements. We weren't perfect at it, but it worked out in the end.

6
al2o3cr 4 days ago 2 replies      
"Instead of surfing the wave and adapting my idea to what a real prospect client was telling me they wanted"

FFS don't do this. There are far too many startups beached on the shores of "well, this one SRS BZNS client wanted us to change what we were doing so we did. Where'd all the rest of our clients go?"

I'm not saying "don't pivot", but "just making what they wanted" (where N(they) = 1) turns you into a poorly-paid contract developer who's also paying to host the result, not an entrepreneur.

7
thu 4 days ago 1 reply      
Do people find it really OK to have a video and a website saying "try it free" and then offer only an email input form? I know that testing whether demand exists is important, but doesn't it have an adverse effect on your reputation to somehow lie to your prospects?
8
PythonicAlpha 4 days ago 1 reply      
I want to shine some light on one side problem that is only scratched at here:

The problem today is (from the perspective of a developer): too many companies rely on just "hire any (cheap) developer" to ramp up the product. I see it all the time: quality is not asked for; many companies (especially in the web business) just want the cheapest developers. They search for a student (at best), because he is cheap and will make a small time estimate and an even smaller fixed-price offer for the project. The student will happily work overtime that is not covered by the initial estimate.

Then the companies go mad when either the programmer runs away or the whole project runs into a blind alley (or both at the same time), because the "totally expensive" programmer had too little experience, e.g. with database development, and the database structure just makes you shiver. Then the shouting and anger is big: "Damn programmers -- all are liars and lazy!"

What went wrong, stated Uncle Bob correctly in his Blog: http://blog.8thlight.com/uncle-bob/2013/11/19/HoardsOfNovice...

But the "cheap, cheap!" culture seams to be unstoppable. If you tell people in advance about "quality" and "professionalism", they don't listen or just laugh at you. It seams, all the people just have to find out the hard way -- but I guess, even than most of them will not learn at all.

9
guynamedloren 4 days ago 1 reply      
Really great writeup, thanks for sharing.

I'm left wondering, though, what you actually did over the two years? You imply that you were working on it full time. Two years full time is a lot of time. You can do pretty much anything in that time (including, as others have mentioned, learn to code).

> idiot plans, budget forecasts, BUSINESS CARDS, fancy website [and writing articles]

I find it hard to believe you can work on those things for two years, day in and day out.

10
snorkel 4 days ago 0 replies      
... one of the prospects was an HR person from a huge Indian manufacturer. They wanted the system NOW and wanted to speak to me. [...] I just needed to build what they wanted.

I know startups that charged down the other path, being hyper responsive to their big customers, and they suffered for it because their biggest customer steered the product vision straight to crazy town. Such startups essentially become the contract development shop of a few big customers, living and dying by the whims of those big customers. Yes, you can pay the bills, but you're essentially working full-time for a few customers rather than building your own enterprise.

11
karterk 4 days ago 0 replies      
I think for first-time bootstrappers, investing some time in a quality blog in a particular field you would like to build products for is really, really useful. Apart from giving you a good audience for launching your first product, it lets you interact with people before you have something to sell them. You learn more about their problems, the existing market, competition and so on.
12
subbu 4 days ago 2 replies      
__If there is just one thing you should learn, it is: Just speak to prospects and extract their pain, then sell the painkiller (before building the product). If they are willing to buy, do take their money and invest that money into building the product.__

This advice always seemed like a stretch to me. Does anybody pay for a product that's not ready yet?

13
Elizer0x0309 4 days ago  replies      
A business person trying to start a tech startup.... It's like a business person looking for musicians to start a band. This is beyond ridiculous. Either bring some skill to the table or go create a "business startup" and stop polluting the industry with yet another failed idea and even worse a "post mortem" of why it failed.

PS: This includes Marketing, Managers as well as the Business peanut gallery.

17
What Could Have Entered the Public Domain on January 1, 2014 duke.edu
354 points by Tsiolkovsky  4 days ago   160 comments top 10
1
donpdonp 4 days ago 2 replies      
While not a solution per se, an alternative exists. If the license for current works are unacceptable, start celebrating other works! Notably, works with a Creative Commons license.

Some Creative Commons cartoons: http://www.seosmarty.com/15-cartoonists-that-allow-using-the...

Creative Commons Music at Jamendo (see the FAQ http://www.jamendo.com/en/faq)

edit: 'per-say' to 'per se' (thx ansimionescu)

2
kevando 4 days ago 6 replies      
For those curious, this is mostly a result of Disney.

http://www.washingtonpost.com/blogs/the-switch/wp/2013/10/25...

3
sentenza 4 days ago 1 reply      
In the EU, we have lifetime plus 70 years. So the first released movie of the Marx Brothers, "The Cocoanuts" (1929), will enter the public domain in 2048, since Groucho lived to be 87.

System is broken. Please reboot.

4
possibilistic 4 days ago 1 reply      
The fairest idea I've come across concerning protecting copyrighted works from falling into the public domain is actually pretty simple: tax exclusivity after the initial 30 years has elapsed. If this tax is non-negligible, companies will be obliged to keep only their best IPs protected and will let everything else fall into the public domain.

The government taxes every other kind of property, so why not IP? Additionally, keeping created works out of the public domain is essentially a tax on the public; this intellectual levy placed on everyone should be balanced by a reinvestment in favor of public interests.

If Disney wants to keep Mickey Mouse out of the public domain, they should pay a yearly fee to prevent it from becoming public property. They'd more than make up for it with the revenue they garner.

I think that this would also encourage less wasteful use of copyrighted properties.

5
sheff 4 days ago 0 replies      
On a happier note, here is a list of authors whose works will be entering the public domain tomorrow in various parts of the world.

http://publicdomainreview.org/2013/12/10/class-of-2014/

6
huskyr 4 days ago 0 replies      
Another interesting tidbit about US copyright is the Uruguay Round Agreements Act:

https://en.wikipedia.org/wiki/Uruguay_Round_Agreements_Act

One of the effects of this act is restoring U.S. copyright to foreign works whose authors hadn't been dead for 70 years in their home country on January 1st, 1996. Such works instead only enter the PD 95 years after publication.

So for example, the last paintings by Theo van Doesburg, a Dutch artist whose works entered the public domain in the Netherlands in 2002, will only be out of copyright in the U.S. in 2026. And that's why you won't see those works on a site such as Wikipedia, which operates under U.S. law.

7
pessimizer 4 days ago 3 replies      
If this stuff did start to enter the public domain after 28+28 years, the modern entertainment industry would be screwed, because they would have to compete with it. Rationally, they'd rather see it burned than free.
8
kriro 4 days ago 2 replies      
The irony that Atlas Shrugged is on the list and massively protected by government IP law is deliciously sad.

More interesting is that Tesla is part of the class of 2014 for 70-year countries :)

50-year countries get some nice additions (some real heavyweights): Robert Frost, Sylvia Plath, William Carlos Williams, Louis MacNeice, Jean Cocteau, C. S. Lewis, Aldous Huxley.

9
seandougall 4 days ago 0 replies      
My wife's response: "Although, really, Ayn Rand fan fiction does not sound like that much fun."
10
will_brown 4 days ago  replies      
I do not see a problem with indefinite copyright protections.

One of the assumptions is that, all else being equal, the same works would exist even without copyright protections. However, I would argue that without the extended copyright protections, most of these [future] classic works would not exist, simply because publishers/studios would not invest in the creation/distribution of the works in the first place. In other words, copyright protection encourages the creation of works.

The OP takes an opposing stance, suggesting that if today's copyright protections had existed historically, they would have stifled the creation of many classic works. This may be the case in certain instances, but to make that argument one must have an in-depth understanding of what constitutes copyright infringement in a legal sense - including all defenses to infringement (i.e. derivative work, fair use, educational/newsworthy use, etc.) - and make the argument on a case-by-case basis. Very few people have any idea of what constitutes copyright infringement - and even among legal scholars, practitioners and judges there is disagreement.

All I know is that if you have ever created anything and had it stolen, you understand the need for legal protection. Plus, it would suck to live in a world where I financially reward thieves because I can't distinguish an original work from a knockoff. Finally, legal protection is just that, protection: there is nothing stopping copyright owners from giving away their works for free, in other words voluntarily releasing their work(s) into the public domain.

18
Losing Aaron: Bob Swartz on MIT's role in his son's death bostonmagazine.com
346 points by cjbprime  2 days ago   201 comments top 4
1
suprgeek 1 day ago 2 replies      
MIT played a key role in Aaron's death: http://gothamist.com/2013/01/15/aaron_swartzs_lawyer_mit_ref...

They refused to sign off on any deal that did not involve jail time. Per the recorded statements of his partner, this was THE one point that weighed on his mind more than anything else.

MIT's pig-headedness in this regard really destroyed any respect I had for that institution. JSTOR made a much more reasoned statement http://docs.jstor.org/summary.html - clearly indicating that they had NO INTEREST in any further prosecution (since they were the primary wronged party).

2
Smerity 1 day ago 7 replies      
I'm still more disturbed by the laws in play.

Aaron was facing a cumulative maximum penalty of 35 years in prison.

The roommates of one of the Boston bombers were only facing 25 years in prison[1] if found guilty of helping Dzhokhar Tsarnaev dispose of a laptop, fireworks, and a backpack in the aftermath of the bombings.

I understand it's not a straight comparison, but no matter how I try to rearrange those numbers in my head, I can't reconcile the impact with the punishment.

[1]: http://en.wikipedia.org/wiki/Boston_Marathon_bombings#Dias_K...

3
tzs 1 day ago 2 replies      
Several comments have talked about 35 year or longer potential sentences.

Those big numbers come from simply taking the maximum possible sentence that can ever be given out for each charge, and adding them all up.

There are two things that make that unrealistic in most cases. First, the defendant is almost always charged with several similar or related crimes that have mostly the same elements. If convicted on more than one charge from such a group, they are only sentenced for one of the convictions.

Second, the sentence takes into account the severity of the particular acts that constitute the crime, and the prior criminal record of the defendant. To get the maximum possible sentence you'd need to have gone way beyond what ordinary violators of that particular law usually do, and you'd have to have a serious criminal history.

What Swartz was actually facing if he went to trial and was convicted was something ranging from probation to a few years, depending on just how much damage the court decided he caused.

If he took the plea the prosecutor was offering, he was facing up to 6 months.

Details with citations on the above are available at [1] and [2].

In the dozens of discussions of the Swartz case we've had in the last year here, the 35 year or 50 year myth has been repeatedly busted. Yet it keeps coming up in each new discussion--often from people who were in some of the previous discussions! Why is it so persistent?

[1] http://www.volokh.com/2013/01/14/aaron-swartz-charges/

[2] http://www.volokh.com/2013/01/16/the-criminal-charges-agains...

4
vex 2 days ago  replies      
Suicide is completely a personal choice. MIT had no reason to try and defend an outsider who hijacked part of their network, and trying to make them seem like they caused him to hang himself smacks of tunnel vision.

It's a natural response to a suicide; we try and search for something to blame. But unless you argue that MIT should have known Aaron was mentally unstable, saying MIT "caused" him to kill himself is illogical. People who commit suicide may desire to because of what they feel about their lives, but the final decision is one's own.

It's sad that it takes a death to bring attention to the IP issues that Aaron's trial had raised.

19
The NSA Reportedly Has Total Access To The Apple iPhone forbes.com
325 points by larubbio  4 days ago   201 comments top 6
1
JunkDNA 4 days ago 5 replies      
I know this headline generates traffic by being about the iPhone, but this is a minor point. The big message from Jacob's talk and the original articles in Der Spiegel is that the NSA can intercept anything. Period. Full stop. People have suspected such far reaching capabilities for some time. This talk and the articles demonstrate that it exists. I'm personally a little uncomfortable with this kind of disclosure. On one hand, the NSA exists for the express purpose of spying. That is their job. You can not like that the NSA is a spy organization and we can debate whether we should conduct spy operations as a society, but I'm not sure what exposing their methods in this level of detail does for advancing that debate. Did people expect them to be a spy organization that was incompetent? A group that makes crappy and obvious listening devices stamped with "Designed by the NSA in Maryland"? On the other hand, the cases of potential abuses and dragnet surveillance capturing everything indiscriminately are extremely worrying. I don't know how a free society can do all this spying in support of legitimate foreign policy goals and at the same time not grow into an out of control, unaccountable organization ripe for abuse.
2
RyanZAG 4 days ago 4 replies      
Aren't we missing a critical point here??

> "The initial release of DROPOUTJEEP will focus on installing the implant via closed access methods." [2007]

OK, we knew this much already. I remember seeing a number of stories on how law enforcement can pull data off an iPhone, etc. Not really much new here.

> "A remote installation capability will be pursued for a future release"

Here is the interesting bit. You don't put this in a document unless you have a good plan on how to do it. Obviously with iOS devices having ports closed and being behind NAT, the NSA can't exploit them remotely. However, the NSA is pretty clear that it will have the capability in the future. Note the date on this - 2007.

Since 2007, what has changed? iCloud allows Apple to install and run code directly on your device remotely. Is there any doubt that the NSA would request Apple give them full access to iCloud? So the real issue here is what that last little line hints at: the NSA was looking to get remote access rights to all iPhones back in 2007 and with the knowledge now that they will happily backdoor AT&T/Google/Microsoft to retrieve data, is there any doubt they are now using iCloud to gain remote access to all iPhones?

I'm sure NSA/Google does the same with Google Play Services.

3
forgottenpaswrd 4 days ago 3 replies      
"one question has been paramount for privacy advocates: How do we, as a society, balance the need for security against the rights to privacy and freedom? "

I hear this fallacious question again and again. It implies that giving total power to government is "security". It is not.

Giving total control to Stalin meant tens of millions of Russians were murdered in the terror; giving total power to Hitler or Mussolini from within democracies meant the total destruction of Germany and Italy, with millions dead.

4
andr 4 days ago 0 replies      
I really see this working remotely, as long as you have control over a cell phone tower or you use a phony portable base station, both of which are within the NSA's reach.

The thing is, phone baseband software (which is reused across different phone models and controls the phone's I/O, including GSM, USB, etc.) has hardly ever been under attack. When the iPhone arrived with its new security model, baseband bugs became one of the major ways to jailbreak a phone. Those bugs have been fixed one by one, but they were mostly on the USB side - the GSM side has been impractical to attack. A carefully crafted GSM packet could in 2008, and probably still can, cause a buffer overflow in the baseband and gain access.

An interesting presentation on the topic: http://www.youtube.com/watch?v=fQqv0v14KKY

5
rlx0x 4 days ago 1 reply      
Now, the talk he gave was interesting, laying out some known and some new facts about the surveillance and automated attack capabilities of the NSA; particularly interesting is the targeting of infrastructure and the traffic-injection systems. And he is right to make the point that it's particularly despicable that they actively sabotage infrastructure security, something everyone on this planet has to suffer from.

But.. I don't even know where to begin. It's not only that we need to convince a large portion of the US population that living in a dystopian total-surveillance state is actually not something to strive for; we can't even begin to discuss those issues in any meaningful way when people have not the slightest clue what's really going on. Even when leaks like this occur, outlining frightening and utterly insane surveillance and attack capabilities, nobody is going to explain it to them (not that anyone cares anyway).

The NSA developed and deployed a global system that enables them to do DPI on all internet traffic, analyze that traffic, inject traffic, and attack every system through countless vulnerabilities and backdoors, all of it automated, and not only against their targets but also against any infrastructure they are interested in.

They have secret laws, can force companies to work with them, force backdoors and not only are the US companies not allowed to talk about those things, they are legally bound to publicly lie about it.

So yeah they can hack every iPhone on this planet, and turn it into a silent listening device, among many many many other things, is that really what we should be talking about?

6
wyager 4 days ago  replies      
This is from a very old version of iOS (2007). We don't know if this is still true.

Regardless, I can say for a fact that there are exploits for all cell phone platforms. iOS exploits are by far the hardest to find. An iOS remote execution 0day will easily fetch $250k. I've seen one go for $600k. For an Android remote exec 0day, you're looking at closer to $50k.

Even if the NSA doesn't have these on hand, they can certainly purchase them.

20
Can-Do vs. Cant-Do Culture recode.net
321 points by minimaxir  2 days ago   128 comments top 14
1
zach 2 days ago 12 replies      
The economist who helped Walt Disney's theme park dream become what it is today[1] said that the most important thing he learned through it all was the profound difference between a "no, because" person and a "yes, if" person.

If you ask many people an audacious "Can we do X?" their response is usually along the lines of "No, because [valid reasons]". They're not wrong, but the basic attitude is to shoot down what doesn't seem to fit with one's own view of the world. These are "no, because" people, and big companies are often full of them.

Much rarer and infinitely more valuable, especially for an entrepreneur, is the person who hears "Can we do X?" and responds, "Yes, if... [possible solutions]". Their response is one of problem-solving instead of confrontation, seeking to find a synthesis of the new perspective and their own. It seems like a small thing, but it is a very significant shift in mindset. Thinking like a "yes, if" person can unlock so much potential.

A friend of mine, one of the most talented and knowledgeable game programmers around, could easily have shot down many of the ambitious ideas that came his way. Instead, he greeted them with enthusiasm, often saying, "It's software! We can do anything!" Wouldn't you like to set out to do amazing things with that person on your team?

[1] - https://d23.com/harrison-price/

2
austenallred 2 days ago 2 replies      
I love the comment from Robert Scoble:

"My friend Andy Grignon worked for Steve Jobs and was on a very small team building the original iPhone. Steve told him "sorry, you can not hire anyone who has worked on a phone before."

Why not? For exactly the reasons laid out here. He didn't want his team to find out what they were attempting to do was "impossible." Andy learned that when he went to AT&T to pitch them on what became visual voice mail. Andy and his team thought it was possible. The AT&T folks thought they were nuts. It took lots of work by Steve Jobs to convince AT&T to try."

3
abalashov 2 days ago 5 replies      
Ultimately, in 1842 English mathematician and astronomer George Biddell Airy advised the British Treasury that the Analytical Engine was useless, and that Babbage's project should be abandoned. The government axed the project shortly after. It took the world until 1941 to catch up with Babbage's original idea, after it was killed by skeptics and forgotten by all.

Is it not reasonable to suppose that it was an idea before its time, and useless in the particular historical context and implementational form in which it appeared?

There has always been utility for mechanical computation, but it's entirely possible that the world simply did not have an application for The Analytical Engine in the 1830s-1840s because other sectors of technology and the economy simply hadn't evolved to a level where they could effectively utilise it, especially given its physical properties--its size, scale, and energy consumption.

I don't know that for a fact, and can't effectively gauge the merits of my own suggestion, as I am neither a mathematician nor a competent historian of the intellectual, scientific and commercial zeitgeist of that period. But, for the sake of argument, is it not possible that this invention fell into the "interesting, novel, but useless" category?

Now, as for the telephone:

1) From the point of view of the telegraph establishment, it was a competitor;

2) Unintelligible voice really is useless. They just weren't far-sighted enough to see that the voice quality could improve, and indeed, it was quite a long time before it did. Local loop quality improved first. Long-distance toll voice really didn't begin to sound good until digital trunking came along. Ask your grandparents or great-grandparents what coast-to-coast long distance phone calls sounded like in the era of analog lines and waveguide-type multiplexing technology;

3) In the heyday of the telegraph era, deploying lines was an extremely expensive and capital-intensive process, and it wasn't until other technological advancements that made possible various multiplexing and aggregation schemes (frequency-division, and later digital TDM) came along that the idea of running a copper line into every home really got to be realistic[1]. I agree that Western Union was a bit shortsighted in turning down this patent, but one could hardly blame them for thinking that universal telephone service wasn't economically possible. That's like selling a business idea today that relies on everyone having a 10 terabit fiber cable run to their home. Yeah, it's possible, and I have no doubt someone will make fun of me in a decade or two for naysaying it in any way, but would you invest $2bn in a related patent today?

What mistake did all these very smart men make in common? They focused on what the technology could not do at the time rather than what it could do and might be able to do in the future.

I don't disagree, but that needs to be fleshed out. No viable entrepreneurial venture can succeed solely on the basis of what it is logically possible for the technology to someday do, or what it could, in principle and in theory, one day achieve. There is a need to realise a return in a usefully short period of time that is also unanimously acceptable to a coterie of investors with varying needs in terms of payoff time frame and patience.

Thus, you need a practical plan for getting to point B, making the technology do X. Even the most far-fetched, high-risk, R&D-driven ventures entail a proposal to concretely deploy and commercialise technology in a period that is usefully short and politically palatable, and that means everyone involved is somewhat constrained to what can be practically envisaged in terms of today's possibilities. One can make some leaps of faith, some intelligent extrapolations and some prescient forecasts, but ultimately, it's something expressed largely in the observational language and ontology of today.

Thus, I can't bring myself to fault someone for doubting, in 1995, that the consumer web was going to be what it is today, or even what it would be six or seven years later, in the early 2000s. It was possible--perhaps even reasonable--to suppose so, but would you have bet the farm on it? Your retirement savings? I'm not sure I would have (not that I have a farm or retirement savings, but oh well).

[1] http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=451163...

4
crazygringo 2 days ago 0 replies      
Of course, too much of a 'can do' attitude can create completely unrealistic expectations, spread your resources too thin, and bankrupt a business, or lead you to waste years or even decades of your life.

The smart choice is obviously a happy medium. Too much of a "can do" attitude is just as harmful as too much of a "can't do" attitude. We all need reality checks.

And this is why diverse teams and groups are so important -- one person says "of course we can't do", another says "of course we can do", and everyone hashes it out until they've come up with a realistic assessment that is neither clouded by overly optimistic nor overly pessimistic thinking.

5
mech4bg 2 days ago 1 reply      
That Alexander Bell quote sounded way too good to be real. It looks like I wasn't the only person to think that; some interesting sleuthing:

http://blog.historyofphonephreaking.org/2011/01/the-greatest...

6
praptak 2 days ago 0 replies      
I don't buy this division into 'can' (the good) and 'cannot' (the bad). It's just two strategies with different outcome distributions. The critic will be right more often, invest in boring, tried ideas, and earn less on average but with less variance. The enthusiast will often fail, but when he's right against the common knowledge, he hits the jackpot.

And picking out those jackpots and their critics ignores the majority of crazy ideas that do indeed fail - "They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown."

Here's some criticism of a crazy idea that actually failed (CueCat): http://www.joelonsoftware.com/articles/fog0000000037.html

Obviously there is a lot of criticism made just for the sake of smugness. Unfortunately, sounding smug does not automatically make one wrong.

7
jasonkester 2 days ago 1 reply      
I wonder how much of this is just a visibility issue.

We notice this same thing here on this site, where every new idea seems to get immediately piled on with negativity. The feeling is that it didn't use to be this way, and many of us old timers will remember a time when new ideas were mostly met with encouragement and constructive criticism.

But I bet if you look at the threads today and back then, you might find that the absolute number of constructive, encouraging comments hasn't changed much at all. Rather, they are simply lost in a sea of negativity spouted by the peanut gallery that seems to have washed in from other places where people dump all over tech news. We used to be conspicuously entrepreneurial here. Now we're a lot more representative of the tech world at large.

So yeah, I think that there are still plenty of people with the right entrepreneurial mindset out there. It's just getting harder to find them.

8
_delirium 2 days ago 0 replies      
I kind of wish the startup community looked like the picture painted in this post. :)

More audacity and innovation, less audacity-lacking "innovation" of the form: how to get users to click ads more often and exit the company for a multiple ASAP.

9
altero 2 days ago 0 replies      
A few years ago, when the iPad mania had just started, I worked for a hardware company on admin software. It was written in Java, was 15 years old (from 1998), and had never had a major rewrite.

All the managers were like 'be like Apple' and 'we must release an iPad app' and 'the PC is over'. The programmers, on the other side, wanted to rewrite some critical parts, introduce automated tests, and fix some very old bugs.

I was the spokesman for the programmers, and soon I became the 'tablet hater' (kind of funny, since I had an Android tablet). Later we even bought some iPads for developers to learn on; those were locked in a manager's office :-). I left the company shortly after that.

So for me, 'Can-Do vs. Can't-Do Culture' is just a sort of bullshit used to mask real problems. Sure, Jobs made iThings, but he pulled massive resources towards the problem. Apple actually bought factories for touch screens before the iPhone was made.

10
aetherson 2 days ago 0 replies      
I don't believe that quote that purports to be a Western Union memo on the telephone. I don't think that 19th Century businessmen put words like "idiotic" into business communication, and I don't think that they used phrases like "the technical and economic facts of the situation," as the word "economy" at that point much more strongly meant "being thrifty at home" and had much less to do with economic systems.

This blog post claims that the quote is fake: http://blog.historyofphonephreaking.org/2011/01/the-greatest...

Slate says it "may" be fake and is awaiting verification: http://www.slate.com/blogs/business_insider/2014/01/02/why_p...

11
tlb 2 days ago 1 reply      
A fine editorial. Stirring. It has inspired me to not write off recode simply because 90% of what's on their front page today is crap.
12
fudged71 2 days ago 4 replies      
I see this all the time in the consumer 3D printing space. Sometimes high tech people act like laggards. "I can't make a metal part on my desk, so it's useless!" "Okay, we're almost there, but how about you look at the progress in this industry and all the other applications that we CAN do right now!"
13
mrbrowning 2 days ago 0 replies      
He's making a good point in the abstract, but I think Horowitz is too close to the matter to understand that a lot of the negativity he cites is a natural reaction to the totally overblown rhetoric of the start-up scene. He inadvertently proves this by referencing such epoch-defining inventions as the telephone and the internet. Many tech start-ups are creating interesting, useful, and sometimes even novel products, but it's nonetheless annoying to anyone with a sense of perspective to hear from every angle that Start-up X is going to change the world by revolutionizing, you know, shoe-resoling.
14
joelandren 2 days ago  replies      
Let's also remember that there is valid criticism of startups and how they operate their business.

If a startup founder is an asshole, let's not excuse the behavior because they are building something worthwhile.

If a startup makes a mistake due to lack of concern about its users (i.e. Snapchat and their security hole), they should be criticized.

All told, I'm all about "can do" culture, but let's not use it as an excuse for boorish behavior or bad business practices.

21
The Lost Art of C Structure Packing catb.org
316 points by Tsiolkovsky  3 days ago   141 comments top 5
1
jandrewrogers 3 days ago 2 replies      
I do optimal structure (and bit) packing without much thought because it is an old habit. As the article states, I have noticed that the only other people that do habitual careful structure optimization these days have been doing low-level and high-performance code as long as I have. Most programmers are oblivious to it.

The reasons you would do it today are different than a decade ago and the rules have changed because the processors have changed. To add two clarifying points to the original article:

- The main reason to do optimal structure packing today is to reduce cache line misses. Because cache line misses are so expensive it is a big net performance gain in many cases to have the code do a little more work if it reduces cache line fills; optimal structure packing is basically a "free" way of minimizing cache misses.

- On modern Intel microarchitectures, alignment matters much less for performance than it used to. One of the big changes starting with the i7 is that unaligned memory accesses have approximately the same cost as aligned memory accesses. This is a pretty radical change to the optimization assumptions for structure layout. Consequently, it is possible to do very tight memory packing without the severe performance penalty traditionally implied.

What constitutes "optimal" structure packing is architecture dependent. The original C structure rules were designed in part to allow the structures to be portable above all else. If you design highly optimized structures for a Haswell processor, code may run much more slowly or create a CPU exception and crash on other architectures, so keep these tradeoffs in mind. The article is discussing basic structure packing which typically has easily predictable behavior almost anywhere C compiles.
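
To make the padding concrete, here is a small sketch using Python's ctypes, which follows the same native alignment rules as a C compiler (the field names are invented, and the sizes quoted assume a typical 64-bit ABI):

  import ctypes

  class Careless(ctypes.Structure):
      # the leading char forces 7 bytes of padding before the 8-byte-aligned
      # pointer, and the struct is padded out to 8-byte alignment at the end
      _fields_ = [("flag", ctypes.c_char),
                  ("ptr", ctypes.c_void_p),
                  ("count", ctypes.c_int)]

  class Reordered(ctypes.Structure):
      # largest-alignment members first: no interior padding, 3 tail bytes
      _fields_ = [("ptr", ctypes.c_void_p),
                  ("count", ctypes.c_int),
                  ("flag", ctypes.c_char)]

  print(ctypes.sizeof(Careless), ctypes.sizeof(Reordered))  # 24 16 on x86-64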

2
anatoly 3 days ago 6 replies      
One trick that's not mentioned is unbundling the struct. Suppose you have a struct with a pointer and a character in it, and a huge array of those structs. If you resent the padding tax, refactor your code to use and pass around two arrays instead, one of pointers and the other of chars.
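
A rough sketch of that unbundling, again sized via ctypes (the field names are invented, and the figures assume 8-byte pointers):

  import ctypes

  class Pair(ctypes.Structure):
      _fields_ = [("ptr", ctypes.c_void_p), ("tag", ctypes.c_char)]

  N = 1_000_000
  array_of_structs = ctypes.sizeof(Pair) * N   # 16 bytes/element, 7 lost to padding
  two_plain_arrays = (ctypes.sizeof(ctypes.c_void_p)
                      + ctypes.sizeof(ctypes.c_char)) * N  # 9 bytes/element
  print(array_of_structs, two_plain_arrays)    # 16000000 9000000
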
3
alextingle 3 days ago 9 replies      
Is this a "lost art"? I always consider the layout when I'm writing a C struct. It's the principal concern that governs the correct ordering of the members.
4
rwmj 3 days ago 2 replies      
He should mention this tool:

http://linux.die.net/man/1/pahole

5
drdaeman 3 days ago  replies      
This could be partially automated with `__attribute__((__packed__))` and a bit of -fipa-struct-reorg for better cache performance. Sadly, there's no kind of `__reorder_yes_i_know_and_i_want_to_violate_c_standard__` attribute. But I really believe managing and optimizing memory layout (unless explicitly necessary, like when declaring serialization formats) should be the compiler's job, not the human's.
22
How Netflix Reverse Engineered Hollywood theatlantic.com
313 points by coloneltcb  2 days ago   129 comments top 10
1
smsm42 1 day ago 5 replies      
For all the high praise that gets heaped on Netflix for their brilliant technology, I have a feeling there is some other Netflix that is concealed from me.

I have been a Netflix customer for years. I thought the idea was brilliant - super-cheap movies arriving whenever you want, what could be better?! I loved Netflix. Then I slowly discovered Netflix was running out of movies I want to watch - to the point where about 95% of the movies I want to see are unavailable. Then there was that streaming vs. DVD fiasco - and I stayed with streaming. But then I discovered there's nothing for me to stream. I thought maybe my tastes were weird - so I went to Wikipedia and IMDB and looked at "top X movies" lists - and most of them, of course, can't be watched on Netflix, except for those few that I've already watched long ago.

And that million dollar recommendation system? I have over 800 ratings, and I have a hard time remembering the last time their system suggested something useful. In fact, the only reason I am keeping the subscription is that my wife has some series on her sub-account that she's watching. For me, Netflix has become almost 100% useless. So I wonder, with all the high praise for their brilliant data usage and innovative technology - am I doing something wrong? Am I missing some important part of Netflix that everybody else is seeing?

2
refurb 1 day ago 1 reply      
A friend of a friend works at Netflix and told me how they use some of this data.

House of Cards was basically a data driven production. Based on Netflix's customer preferences, they knew that a political thriller, starring Kevin Spacey and directed by David Fincher would maximize the number of views based on the habits of its customers.

It would appear the data was correct!

3
danielharan 2 days ago 1 reply      
Netflix's data allows it not only to recommend movies, but also to finance original productions.

Lots of businesses want "recommendation engines" to appease their cargo cult gods, few ask what possibilities their data really creates.

Sometimes data can make you better at delivering your service. Other times you can optimize inventory, enter entirely new lines of business or even obsolete your competitors.

4
eli 2 days ago 8 replies      
Haven't people gone to jail for scraping a URL and enumerating its possible values?
5
zheng 2 days ago 1 reply      
What would be really cool is if this list of genres was open-sourced somewhere. I can see Netflix not wanting that, but it would really save time for however many hackers read this article and decide they want the same data.
6
msg 2 days ago 3 replies      
At the top of the article is a Netflix genre generator. That is worth the price of admission all by itself.

But then there's a fairly entertaining look into what happened to content at Netflix after the million dollar challenge.

7
mixmastamyk 2 days ago 0 replies      
Meanwhile, their client still can't separate my daughter's kid shows from mine. It took them several years to implement profiles on iOS and then another year to do it on Android.

Now that profiles are implemented, my "Top Picks" last night were still dominated by My Little Pony.

I'd also like to choose which shows she can watch, but the client doesn't support that. </complaints-over> ;)

8
shawnc 2 days ago 0 replies      
I find the part at the end about the Perry Mason aspect very interesting; it's actually my favourite part of the article.

And the final sentence, feels like the real reason this was posted to HN: "And sometimes we call that a bug and sometimes we call it a feature."

Edit: Also, the 'Gonzo' genre of Post-Apocalyptic Comedies and Friendship seems to have got its first entry in "This Is The End".

9
hershel 2 days ago 1 reply      
There's also jinni.com, which has a similar system, not limited by UI issues, and which can be used globally. Usually I get great recommendations from them, and they're fun to play with.
10
discardorama 2 days ago  replies      
How is this any different from what Pandora did with music?
23
David Cameron's Internet porn filter is the start of censorship creep theguardian.com
309 points by iamben  1 day ago   201 comments top 2
1
netcan 1 day ago 3 replies      
I'm really disappointed with this whole situation. The government. The parliament. The people & the media. I resent all the apologizing and explaining and the "don't attribute to malice" excuses. I also think that it's very possible that unrelated "side effects" like surveillance and control of the internet "media" have always been an intended unstated goal.

This article is right on the right track. This is an attempt to control the discussion, the definition of normal and public morality. It's is not a response to an actual problem. It's old fashioned conservatism and paternalism.

The bottom line here is choice. Parents everywhere have easy solutions for voluntary porn filters. You can have them set up by the people you buy your internet from or the people that sell you your computer. It's cheap or free and it's available. I do not buy the "it's too complicated" argument. This is parents' responsibility, and it just isn't that hard to meet that responsibility.

2
bmj1 1 day ago  replies      
"Never attribute to malice that which is adequately explained by stupidity." (1)

As a UK citizen, I've been very disappointed by this debacle. I suspect that Cameron's heart was actually in the right place (protecting the children, etc) but he does not understand the significant number of unintended consequences that we are likely to see (and are already seeing).

I would suggest doing the following to make this workable long term:

- Centralise the list of sites categorised as obscene/pornographic/etc (why should it be different for different ISPs?)

- Make the list of these sites publicly accessible and searchable

- Ensure the list is maintained by a non-political and balanced panel (is this possible?)

- Implement a process for removal requests where a site is mis-classified and ensure that this appeal process is separate from the initial panel

- Implement KPIs on the effectiveness of the filter that take into account false positives + false negatives

- Remove any automatic categorisation based on keywords, this is too crude

- Make publicly accessible the guidelines for classification

Unfortunately, I don't expect the above to actually happen :(

1. http://en.wikipedia.org/wiki/Hanlon's_razor

24
Why does Google prepend while(1); to their JSON responses? stackoverflow.com
305 points by gs7  6 days ago   52 comments top 16
1
Stealth- 6 days ago 5 replies      
I think it's important to note that this is a bug that affects older browsers only. Modern IE, Chrome, and Firefox have security measures that do not allow scripts to capture values passed to constructors of a literal. That way, this hack is only needed for older browsers and will hopefully not be needed at all in the future. For more info: http://stackoverflow.com/a/16880162/372767

Also note that this attack, JSON Hijacking, is different than a CSRF (Cross Site Request Forgery) and has little to do with CSRF tokens.
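
For the curious, this is how a legitimate same-origin client consumes such a response: it fetches the body via XHR and strips the known prefix before parsing, something a cross-site <script> include cannot do. A minimal sketch, using the prefix variants mentioned in this thread:

  import json

  KNOWN_PREFIXES = ("while(1);", "for(;;);", ")]}',\n")

  def parse_guarded_json(body):
      for prefix in KNOWN_PREFIXES:
          if body.startswith(prefix):
              body = body[len(prefix):]
              break
      return json.loads(body)

  print(parse_guarded_json('while(1);["secret","data"]'))  # ['secret', 'data']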

2
tzury 5 days ago 0 replies      
There is a long discussion about this at

https://news.ycombinator.com/item?id=5168121

(from about a year ago)

3
frik 5 days ago 0 replies      
Chrome DevTools recognizes while(1) and for(;;) in the network tab (JSON preview). Sadly, Firebug still doesn't know how to handle this and shows no JSON preview :(
4
andreyf 6 days ago 3 replies      
Does anyone know what browsers allow you to override the Array constructor? I was under the impression that modern browsers don't.
5
matchu 6 days ago 0 replies      
It looks like modern Chrome doesn't trigger setters when constructing from literals, so that's encouraging. http://jsfiddle.net/KY4Sa/
6
CCs 5 days ago 0 replies      
A good description: http://stackoverflow.com/questions/6339790/what-does-a-ajax-...

The idea: you need such a workaround only if you return a JSON array.

Most APIs return a JSON object, in which case the attack does not work; it will result in a syntax error.

7
robocat 6 days ago 1 reply      
Would introducing a syntax error into my JSON help prevent CSRF attacks? We don't use JSONP.
8
ciniglio 5 days ago 1 reply      
So does this solve the problem with using remote JS templates (advocated by DHH and 37s) that was outlined here [1]?

[1]: https://github.com/jcoglan/unsafe_sjr/blob/master/README.md

9
silon3 5 days ago 0 replies      
Is it correct to use the Content-Type application/json on this? IMO: no.

(I've just tested the Firefox network view and it breaks the response display with a syntax error -- there should be an option to select the format.)

10
frozenport 6 days ago 2 replies      
What happens when you visit a malicious website and your computer gets stuck on `while(1)`? Wouldn't a syntax error be better?
11
jbrackett 5 days ago 0 replies      
After seeing this I went to see if AngularJS had anything built in to mitigate JSON hijacking, and it does. It will strip ")]}',\n" off of JSON responses if the server includes it.

http://docs.angularjs.org/api/ng.$http#description_security-...

12
Kiro 6 days ago 2 replies      
Why doesn't this prevent CSRF?
13
frik 5 days ago 0 replies      
Facebook uses "for(;;);" as it's one char shorter.
14
homakov 5 days ago 2 replies      
Google is wrong IMO: there is no need to have such a workaround. In Rails we had a similar problem https://community.rapid7.com/community/metasploit/blog/2013/... and fixed it by adding a request.xhr? check on the server side.

while(1) is an ugly solution to a currently non-existent problem.

15
dontdownload 5 days ago 0 replies      
It's the bot.
16
alixaxel 6 days ago 1 reply      
Smart!
25
How I reverse engineered my bank's security token valverde.me
294 points by valverde  1 day ago   62 comments top 19
1
jwr 17 hours ago 3 replies      
Think about it for a moment. He did all this (impressive) work just because the application that the bank provided sucked.

Now, once he writes a better app, what do you think the bank will do? Hire him (or buy the app), or fight him?

How much effort do we collectively waste because of moronic organizations that force their crap upon us, that we cannot escape from? (You can go to a different bank, but what if they all uniformly suck?)

2
lstamour 21 hours ago 1 reply      
This post had me guessing, but good work. First I saw the card with codes and thought you'd be showing that they weren't randomly created. But then you went on to the app -- and from the "What you'll need" section, when I saw the decompiler and the rest, I thought, "I know what comes next," but again I was surprised. You went above and beyond with the decryption of obfuscated error messages, etc. I could have guessed that it was OATH TOTP, as that's how these apps should work. Congrats on getting there from the source code, and indeed it's too bad they didn't retain compatibility with Google.

To fix the bug you mention -- root access from phone -- perhaps you could use something like Yubikey Neo loaded with ykneo-oath. I was searching the code for ykneo-oath (it's a java applet for the small key) to see where the timestamp was used for the dates, but it appears to be part of the YubiOATH app: https://play.google.com/store/apps/details?id=com.yubico.yub... So you'd have to modify the app source (it's on github). The advantage, however, is that your secret isn't stored on your phone and vulnerable to root apps. Instead, your secret is on a mostly-offline key inaccessible from your phone. There's a YouTube video on how it uses NFC to get that OTP from the Yubikey when you need it. In case you're somewhat extremely paranoid, this might interest you. :) For the truly paranoid, you've found a way to disable account recovery methods while mixing time-based and counter authentication mechanisms ;-)

3
nly 14 hours ago 0 replies      
Just another example of a proprietary implementation tweaking a de-facto standard / well-known algorithm (RFC 6238) just enough to be annoying.

Fresh in my mind is the Wii U controller reverse-engineering presented at 30C3, where the WPA-PSK handshake protocol was tweaked by performing bit-rotations on the resulting keys.
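
For reference, a minimal sketch of the plain RFC 6238 computation (standard OATH TOTP with the usual RFC 4226 dynamic truncation, not the bank's tweaked variant; the base32 secret below is a made-up test value):

  import base64, hashlib, hmac, struct, time

  def totp(secret_b32, digits=6, period=30):
      key = base64.b32decode(secret_b32)
      counter = int(time.time()) // period      # moving factor: 30-second window
      msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
      mac = hmac.new(key, msg, hashlib.sha1).digest()
      offset = mac[-1] & 0x0F                   # dynamic truncation offset
      code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
      return str(code % 10 ** digits).zfill(digits)

  print(totp("JBSWY3DPEHPK3PXP"))               # six digits for the current window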

4
Vespasian 15 hours ago 5 replies      
While I don't know about the situation elsewhere in the world, here in Germany most banks retired the single-use codes (called TANs or, if indexed, iTANs) quite some years ago for being insecure.

Most online banking will now require a code created per transaction that is 1. either sent to you via text on your mobile phone (and is thus prone to phone malware) or 2. generated using an external device and the chip on your banking card[1] (true two-factor authentication). Both systems will show you the exact details (target account, amount to be sent) before confirming the transaction. A virus on the computer is not sufficient to hijack your account.

Just out of curiosity: What security measures do your banks employ and do they allow you to upgrade to a higher security level?

[1] https://www.ksklb.de/privatkunden/banking/chiptan/chiptan_fa...

5
fpgaminer 21 hours ago 0 replies      
Wonderful work, and thank you for documenting the experience. From the title, I thought this would be a story about decoding a banking website's cookies and gaining access to other peoples accounts, or something similar. I was quite surprised to see that your bank did basically everything right. I was also surprised that you went so far as to implement an embedded clone. Very cool!

P.S. Consider yourself lucky to have such a bank. Here in the U.S., our major banks do not take security seriously by any stretch of the imagination (they have little incentive to).

6
shocks 3 hours ago 0 replies      
Dark grey text on a light grey background. :(

Apart from this, awesome read.

7
memracom 23 hours ago 4 replies      
A good lesson for those of us who have had the idea of building a similar app to generate one-time passwords. Now we have a better idea of the minimum that needs to be done to build such an app securely. Thanks.
8
jrockway 23 hours ago 1 reply      
The only point of these token generators is to provide a stream of tokens, so that if the generator is cloned (which is trivial), that can be detected. That's it. As far as I can tell, this attack does not prevent the server from detecting a cloned token.

(To do that, you would have to install a new client on the victim's device that will increment its counter and tell you the counter when you ask.)

9
raverbashing 13 hours ago 0 replies      
Interesting

I suppose my bank token uses the same structure and produces a similar code (though I haven't reverse engineered it).

10
StavrosK 16 hours ago 2 replies      
It looks like this is down, does anyone have a mirror? It's frustrating to read all the gushing comments and not be able to read the post!
11
r4pha 11 hours ago 0 replies      
A very interesting read. Also, I think I saw you on facebook's hackathon this year!
12
sajb 16 hours ago 0 replies      
Thanks valverde, quite interesting work, and very well written.
13
B0Z 20 hours ago 1 reply      
Article is 404 inside of 5 hours. That's fairly swift. (assuming OP didn't remove it himself)
14
easy_rider 23 hours ago 0 replies      
Well explained, nice read!
15
sebastianavina 23 hours ago 4 replies      
He is going to get a very awkward phone call from the bank...

Some years ago I stumbled upon something similar on a webpage, posted it on reddit, and the next day the IT manager of the company called me... it was one of the most embarrassing days of my life.

Lesson: don't mess with other people's work just because you can...

16
elwell 1 day ago 0 replies      
Wow, that's commitment!
17
piyush_soni 1 day ago 0 replies      
Just one word. Wow!
18
bblough 1 day ago 0 replies      
Nice work!
19
fiorix 1 day ago 0 replies      
dat hax
26
30C3 Recordings ccc.de
260 points by znq  5 days ago   38 comments top 6
1
hansjorg 5 days ago 0 replies      
Transcripts can be found here: http://subtitles.media.ccc.de/
2
madethemcry 5 days ago 1 reply      
I found a similar posting on HN last year. I saved exactly 97 videos from 29C3, all of them with interesting titles. My brilliant plan: watch them over the year while traveling by train or plane. Maybe I read HN or slept instead, but I didn't watch a single video. Now I have another ~100 great videos to watch. I really want to watch them all, but I doubt I will. I need a direct brain uplink.
3
3rd3 5 days ago 14 replies      
Which recordings do you recommend? (One per comment.)
4
weavie 5 days ago 5 replies      
Anyone care to summarize what this is about?

From what I gather these are 30C3 recordings from a CCC-TV website. The recordings have titles like "FPGA 101" and "Programming FPGAs with PSHDL". There is no about page, and the home page has further topics like SIGINT13 video release, SIGINT12 video release and 28C3 webm release.

I'm confused...

5
Cyclenerd 5 days ago 2 replies      
10Gbit/s mirror (also offers ftp and rsync): http://ftp.halifax.rwth-aachen.de/ccc/30C3/
6
hydrogen18 5 days ago 2 replies      
Python script to download them all

https://gist.github.com/hydrogen18/8185934

27
A Short Story for Engineers txstate.edu
260 points by shawndumas  5 days ago   85 comments top 25
1
dkarl 5 days ago 7 replies      
I like the values that jokes like this reinforce (simplicity, creativity, and proactivity versus complexity, expense, and bureaucracy) but I wonder if they serve a positive purpose in engineering culture. Do we tell these jokes to keep ourselves on our toes, to make ourselves better? Are we really in danger of forgetting which is better, simplicity or complexity? When we create complex and over-engineered systems, is it because we forget that simplicity is better?

I don't think we do. I think we tell ourselves these jokes to contrast good engineering with bad engineering and to congratulate ourselves for being on the right side. A good joke would lead you down the garden path, encourage a bit of smugness and then rip the rug out from under you. This joke telegraphs the punch line from the start: it encourages smugness and then vindicates it. A healthy joke would make us uncomfortable about whether we would have been on the right side, whether we are doing a good job of living up to our values. This joke reassures us that the problem is other people's values, and by doing so, it promotes exactly the kind of complacency that it makes fun of.

2
HCIdivision17 5 days ago 4 replies      
My opinion has shifted over the last few years working in plants, and I've now settled on the idea that the fan solution probably needed the eight million dollar project. Without the project, the operator would not have been inconvenienced, nor would they have achieved their goals as soon.

Also remember that the project was worth it - it was returning on the investment. Ideally the simple solution would have been found first for a massive windfall of savings, but industry runs on constant, small, incremental changes over many years. And it takes a very special mindset to invent awesome hacks like the fan trick!

The operator should instead be applauded for making it so no other plant needs to buy such an expensive system!

Edit: also, never underestimate the utility of inconveniencing operators. They will find the most brilliant, clever, and cheap hacks to solve problems. Watching operators is the best diagnostic tool available. When you see a c-clamp or duct tape on the machine, you know exactly what needs workin' on next!

3
wikwocket 5 days ago 1 reply      
This is a cute story about over-engineering and thinking outside the box to find the simplest solution, but anyone with manufacturing experience can tell you that many factories have compressed air lines at each machine, and frequently use them to blow bad parts off of a conveyor/feed rail.

American manufacturing factories are actually homes to tremendous ingenuity and practicality. To an outsider they may seem loud, dirty, and disorganized, but the engineers inside routinely deal with issues like "how can we catch bad parts before they roll off the line, using spare parts, scrap metal, and a $20 budget?" I have seen some amazing Rube Goldberg feeding systems that can outperform expensive laser/optical/diverter gate packages.

4
WalterBright 5 days ago 0 replies      
The engineers should be working alongside the factory line. That this often doesn't happen isn't always the fault of the engineers or management.

Back when I worked on the stab trim gearbox at Boeing, it came time to put it on the test rig and load it up. The test engineers gleefully told me they were going to bust my design. So joy for me, I got to go to the shop and get my hands dirty testing it!

By the time I got there, they had my baby all mounted in the custom test rig, with a giant hydraulic ram all set to torture it. There was some adjustment needed, and I leapt forward to make it. The union shop steward physically blocked me, and said I was not allowed to touch anything. I was only allowed to give directions to the union machinist there, and he would turn a wrench at my direction.

Jeez, what a killjoy moment for me.

Anyhow, to make a long story short, when they loaded up the gearbox with the ram, the test rig bent and broke, and that lovely gearbox just sat there. Nyah, nyah, nyah to the test engineers and back to the office building for me.

5
mathattack 5 days ago 1 reply      
Great story, and widely applicable.

I worked on a very large process and technology improvement program for a Fortune 50 company. One critical piece of the project was a scheduling system for field technicians. After 100+ effort years (don't ask!) we got it developed and tested, and it achieved the 15 minutes per technician productivity improvement, justifying the massive expense. We then found that we could double the benefit by having them reboot their laptops weekly instead of nightly. (Though the technology architects screamed bloody murder)

6
SilasX 5 days ago 0 replies      
A cheesy, apocryphal story written like a forward from Grandma on a site that looks like it was stolen from 1996? How did it make the front page?
7
pmorici 5 days ago 0 replies      
This is like an engineering urban legend. I've seen it on here before but the circumstances were different. Last time this was posted it was a Japanese soap factory instead of a toothpaste factory.
8
juddlyon 5 days ago 2 replies      
Similar to the "Knowing where to put the X" story: http://www.engineering.com/DesignSoftware/DesignSoftwareArti...

Also, the NASA space pen vs. the Russian pencil.

9
spullara 4 days ago 0 replies      
This is one of the reasons the engineers at Tesla work on the factory floor. Take the tour if you can, it is great.
10
JackFr 5 days ago 0 replies      
In 1985 I worked in a factory on a line producing tubes of vitamin A&D ointment (similar packaging to toothpaste tubes.) The filling of the boxes with the tubes was actually done manually, I suppose because ointment is higher margin, lower volume.

We also produced foil packs (like fast food ketchup packets). That machine was the coolest mechanical device I've ever worked with.

11
southpawgirl 5 days ago 1 reply      
> and six months (and $8 million) later a fantastic solution was delivered

In real life the solution applied wouldn't be this one, nor the cheap fan, but some dude being paid peanuts to shake each box by hand.

12
codegeek 5 days ago 0 replies      
I have read this story before and it reminds me of the phrase "Necessity is the mother of all inventions". What if that $8M project had never been implemented? The factory worker would then not have needed to manually go and remove the empty boxes. So one way to look at it is that the $8M project actually created a necessity to be more efficient and gave the guy the idea to stop manually moving the boxes by installing a fan, which in turn solved the overall problem of empty boxes being shipped. Maybe he would have thought of all this without the $8M project, but what are the odds?
13
analog31 5 days ago 0 replies      
Everybody standing on the sidelines with no skin in the game is always proud to point out the engineer's mistakes after they have been made.

I comfort myself with Teddy Roosevelt's "man in the arena" speech.

14
seivan 5 days ago 0 replies      
I think most engineers are familiar with easy quick hack solutions that are cheap and fast. You want this to have an effect? Tell it to the product monkey overlords or the design "gurus"
15
ausjke 5 days ago 1 reply      
Old story; it used to be a USA solution (high-tech, expensive) vs. a Chinese factory solution (the fan added by a worker).
16
11thEarlOfMar 5 days ago 0 replies      
There are a couple of points that come to mind. First, management needs to be judicious about how problems get solved. Does it require committee? Or a lone actor? Which department should own it or should the CEO take it on personally? Second, there is no doubt that an organizational approach to problem solving is going to change as a company scales. The path the information took in this parable likely was from customer service to upper management to engineering. A CEO that will accept an $8M solution to such a problem is probably running a multi-billion dollar company. If this had been a $50 million company, no way he would have felt satisfied that it was money well spent.
17
Aloha 5 days ago 1 reply      
You'd expect the fancy scales to reject the empty boxes, but instead it appears they just sounded a bell. The workers added the rejection feature once they had an incentive to do so (the ringing bell).
18
johngalt 5 days ago 0 replies      
I think there's a similar story about Fedex being the highest throughput network provider.
19
loomio 5 days ago 0 replies      
For me the lesson here isn't as much about engineering as incentives and inclusion. If you engage people who are actually on the front lines in solving the problems, great ideas will emerge. These are the people who understand the problems best, and can be most motivated to fix them.

But in order to do that you have to effectively align incentives for them to solve the problems. If companies treat employees as disposable automatons, and do not allow them to share in the success of the business or benefit from improving workflows, they have no motivation for doing so.

So many companies shoot themselves in the foot by bringing in "experts" when the real experts are right there on their payrolls, but no one is asking them their opinions or creating a situation where they would be inclined to give them anyway.

20
dsugarman 5 days ago 0 replies      
How it is usually done in the fulfillment industry is with a scale that changes the track if a package is off weight by more than a certain percent (think of how train tracks work). The problem here is tougher than just a toothpaste factory, because you can have multiple items in one purchase order and you have to make sure all items are in the box. Stopping the entire line every time something is off with one package is never a good solution. By pushing the packages into a 'problem' pile, someone can figure out what is wrong with each one and get things moving again on their own schedule.
21
coloncapitald 4 days ago 1 reply      
The story doesn't suggest that the CEO or management staff should have thought of a fan before. It suggests that they should probably have looked into the problem better, which might have involved visiting the production line and asking the workers how they would fix the issue inexpensively. Then probably one of them would have come up with this solution, or maybe an even better one.

I see people bringing up points like "What if the fan dies?" or "What if the weight of the boxes increases due to extra packaging?". IMHO, these arguments are invalid for the same reason. The fan itself is not the solution.

22
bowlfeeder 4 days ago 0 replies      
It's a nice story, but anyone familiar with mechanical feeding systems[1] could tell you air jets have been commonly used to reject parts for decades.

[1] http://en.wikipedia.org/wiki/Bowl_feeder

23
ttdan 5 days ago 0 replies      
Alternate takeaway: visibility of key metrics/information (the bell on the expensive machine) is a strong motivator. Worth considering when spending resources on things like creating informative dashboards and proper instrumentation to focus a team on key metrics.
24
kimonos 5 days ago 0 replies      
Haha! Nice one! Thanks for sharing! Happy New Year to all!
25
lani 5 days ago 0 replies      
oooh !! 8 Mill !! I'd like that ..
28
Ask HN: What's your speciality, and what's your "FizzBuzz" equivalent?
248 points by ColinWright  2 days ago   330 comments top
1
crntaylor 2 days ago  replies      
I was a mathematician, and now work in finance (systematic trading). I've found a reasonable negative filter is

  A jar has 1000 coins, of which 999 are fair and 1 is double headed. Pick a coin at random, and toss it 10 times. Given that you see 10 heads, what is the probability that the next toss of that coin is also a head?
That tests their ability to turn a problem into mathematics, and some very basic conditional probability. Another common question (that I don't use myself) is to ask what happens to bond prices if interest rates go up.
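
For anyone checking their answer, the conditional-probability arithmetic can be verified with exact fractions (a sketch under the standard Bayesian reading of the question):

  from fractions import Fraction

  prior_double = Fraction(1, 1000)
  prior_fair = Fraction(999, 1000)
  heads_if_fair = Fraction(1, 2) ** 10     # chance a fair coin shows 10 heads

  # Posterior probability the chosen coin is double-headed, given 10 heads
  post_double = prior_double / (prior_double + prior_fair * heads_if_fair)

  # Next toss: certain head if double-headed, 50/50 if fair
  p_next_head = post_double + (1 - post_double) * Fraction(1, 2)
  print(post_double, float(p_next_head))   # 1024/2023, ~0.7531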

29
Show HN: Given an API, Generate client libraries in Node, Python, PHP, Ruby github.com
246 points by sunkarapk  1 day ago   73 comments top 22
1
gamache 1 day ago 3 replies      
Wouldn't it be wiser to choose a hypermedia format for the API, and then use a generic hypermedia client on whichever platform you like? Then you write just as many client libraries (zero), but the problem of pushing updates to your clients is solved as well.

Full disclosure: I wrote such a library for Ruby, called HyperResource. https://github.com/gamache/hyperresource

2
jxf 1 day ago 3 replies      
Just tried this out and Alpaca is an awesome tool. However, I'd never want to release this in production without tests. Alpaca doesn't generate tests, so you're back to maintaining the tests for your N different platforms/languages. But you'd have the same problem with its competitors, too; Thrift [0], for instance, doesn't generate tests either.

Overall, I'm not sure that the time savings is as big as it first appears, but I think it's great for quick projects.

[0] http://thrift.apache.org/

3
memset 1 day ago 0 replies      
This is cool. I would suggest that it would be very useful to have this kind of thing for JSON Schema [1], which is what I use with Python code to validate incoming JSON. (I was originally hesitant to use it, but since getting into it, I have yet to run into a use case which it cannot handle.)

There is also an RFC for "JSON Hyper Schema" which is intended to describe REST APIs. It doesn't have much library support in much of everything, but I am surprised that it hasn't taken off!

I like that this library is fairly opinionated (options for how to authenticate, supported formats, etc.) Though I worry that that creates a bit of inflexibility - for what exactly does "oauth" actually mean, there are always vagaries.

Neato!

[1] http://json-schema.org/
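
A minimal sketch of that validation pattern, using the third-party Python jsonschema package (the schema and field names here are invented for illustration):

  from jsonschema import validate, ValidationError  # pip install jsonschema

  schema = {
      "type": "object",
      "properties": {
          "user": {"type": "string"},
          "age": {"type": "integer", "minimum": 0},
      },
      "required": ["user"],
  }

  try:
      validate(instance={"user": "alice", "age": -3}, schema=schema)
  except ValidationError as err:
      print("rejected:", err.message)  # -3 is less than the minimum of 0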

4
kimmel 1 day ago 1 reply      
I like the SPORE (Specification to a POrtable Rest Environment) approach better. You create a description file in JSON and each native language client can use that file to access the HTTP API. https://github.com/SPORE/specifications

SPORE already has clients for Clojure, Javascript, Lua, NodeJS, Perl, Python, and Ruby. I have used SPORE in a few projects and I was not disappointed. Another approach to solving the cross language library problem.

5
sunkarapk 1 day ago 2 replies      
Author here. I covered almost everything in the documentation. And there is also a small example which is hosted at https://github.com/alpaca-api.

Please ask if you have any questions. Thanks

6
codereflection 1 day ago 1 reply      
I have to admit, this is very reminiscent of "Add Service Reference" in Visual Studio, a capability which I have grown to despise over the years. The code was almost always incomprehensible. I cannot tell you how much I loathe seeing the comment at the top of a file "This was generated by a tool".

Having said that, this tool does look interesting. I hope that a goal is to always make sure that the generated code is as readable, and maintainable, as possible. Also, as mentioned by others, adding generated tests to the generated client libraries is extremely important.

7
mjs 1 day ago 2 replies      
Regarding bug reports:

> Guaranteed reply within a day.

That seems difficult to achieve--wonder how they're doing that? (Also, why??)

8
pjmlp 1 day ago 2 replies      
Can we please stop calling Web Services APIs?
9
jodoglevy 22 hours ago 0 replies      
Very cool, but why come up with a new API schema rather than use an open standard like OData (http://www.odata.org/)? Then Alpaca would be compatible with a bunch of APIs that already exist today. In fact, something like this (generating client libraries from APIs) may exist for OData already, but if it does, I've only seen it for .NET and OData (Visual Studio 'Add Service Reference').

This is actually pretty similar to a side project I've been working on called Gargl (Generic API Recorder and Generator Lite) (https://github.com/jodoglevy/gargl). Haven't gotten around to doing a Show HN post yet, but would love any feedback or to combine efforts. Basically it lets you generate an API for websites that don't have APIs publically available, by looking at how a web page / form submission of that web site interacts with the web server. You record web requests you make while normally browsing a website via a chrome extension, parameterize them as needed, and then output your "API" to a template file. Then this template file can be converted to a client library in a programming language of your choosing.

10
mikekekeke 1 day ago 2 replies      
Am I right in thinking about this like a WSDL, but based on JSON?
11
squar1sm 1 day ago 0 replies      
I think this is fantastic. @sunkarapk, great job. That it's written in Go makes using it so much easier. Wonderful.
12
johnnyio 1 day ago 1 reply      
Describing an API is not hard, but the API authentication method is. How do you think you will do it?

Edit : If you don't make oauth consumption simpler, you don't really solve the problem

13
acbart 1 day ago 0 replies      
So, this is markedly similar to the project I've been working on in grad school, only without static typing: http://research.cs.vt.edu/vtspaces/realtimeweb/ Also, mine is explicitly geared towards educational purposes. I'm about one third of the way through version two, but I wonder if we can cross-pollinate our code bases to get something even better.
14
endeavour 1 day ago 1 reply      
Does this use the JSON Schema spec or have you reinvented the wheel?
15
abengoam 1 day ago 0 replies      
That looks fantastic, and seems to be in alignment with some things I have been doing lately (generating the server side controller of the API in Clojure + a set of documentation, from a set of definitions of API methods). Good job!
16
egonschiele 1 day ago 0 replies      
Useful, well-documented, TODOs right in the readme, and fast response for pull requests. I wish every open source project was like this.
17
mmccaff 1 day ago 0 replies      
Well done!

Know what I think would be really neat? If it could be pointed at an instance of Swagger-UI, or use the same discoverUrl that Swagger-UI would use, and spit out the libraries from that.

If you're not familiar.. https://github.com/wordnik/swagger-ui

18
elwell 1 day ago 0 replies      
Great idea! Including Obj-C would be very helpful.
19
flippyhead 1 day ago 1 reply      
I've been enjoying http://apiblueprint.org
20
notastartup 1 day ago 1 reply      
oh man you just killed a major feature of mashape.com
21
bashtian 1 day ago 0 replies      
I did something similar for Go and Java. It's simpler if you don't need the whole API, but of course not as powerful. https://github.com/bashtian/jsonutils
22
reklaklislaw 1 day ago 0 replies      
fwiw, I experimented along this line, dynamically generating python wrappers from yaml: https://github.com/reklaklislaw/rest_easy

It lacks documentation, a bunch of features, and parts smell pretty bad, but since the topic came up I thought maybe someone would find it interesting, if only vaguely.

30
Show HN: Use any text as a domain name github.com
244 points by daenz  3 days ago   138 comments top 11
1
TeMPOraL 3 days ago 6 replies      
I might be missing something, but I completely don't get this idea, especially with the examples provided by the author. I have two major concerns:

> Bind searches to domain names, eg "food in chicago" => f02970848a63988965aa40cd368ffcf9046209ca.com

This IMO is bad, and goes in a completely wrong direction. We've invented search engines precisely so that such phrases are not bound to a particular domain. Who would handle the "#://food in chicago" domain? Would it be Google? Bing? Yelp? A local restaurant chain? Or maybe some scammers? And who would maintain the completely different website "#://food in Chicago", and why does "#://Food in Chicago" want to silently install malware on me?

The reason searching for such phrases makes sense, while having them as domains does not, is that things like "food in chicago" are poorly defined, fuzzy concepts. It would feel weird to change one letter in a query, or replace the word "food" with, eg. "something to eat", and see a completely different website. Moreover, major search engines are more or less egalitarian wrt. businesses. Yes, there's the whole SEO thing, but you can't get full control of which food joints are listed near your location just because you managed to register it first. I can (and do) trust listings from Google; they have both incentives and a track record of being fair. I will never trust listings from a random-autogenerated-squat-scam-business-site.

Which brings me to the second point,

> Good domain names are pretty scarce. It's a source of frustration for anyone who has ever tried to buy a domain.

Yes, they are, and the primary source of frustration is that they are mostly taken by various squatters and other scum of the Internet. What will happen is that, the moment there's any real possibility such a hash-domain scheme is introduced, all those evil people and companies will take all the domains like "#://microsoft", "#://android" and "#://insert any popular keyword or phrase here" in order to sell them back to real businesses for boatloads of money. And then we'll be back to square one, with maybe a little bigger domain space than we have right now. Bad people win, good people lose, and nothing changes.

So, again, the concepts behind this idea elude me.

2
arn 3 days ago 4 replies      
Reminds me of "RealNames". Dot-com era company. $130 million in funding.

http://en.wikipedia.org/wiki/RealNames

RealNames was a company founded in 1997 by Keith Teare. Its goal was to create a multilingual keyword-based naming system for the Internet that would translate keywords typed into the address bar of Microsoft's Internet Explorer web browser to Uniform Resource Identifiers, based on the existing Domain Name System, that would access the page registered by the owner of the RealNames keyword.

3
Patrick_Devine 3 days ago 4 replies      
This is a horrendously horrible idea, for the same reason that unicode domain names are a bad idea. Domain names are important because they provide a reasonable amount of trust. If I type http://apple.com, I'm 99.99999% certain that I've connected to Apple's website. This gets nasty with unicode, because a person can spam your email account and get you to click on a URL which looks very similar to something like apple.com, but really points to a malicious site (thank you, Cyrillic characters).

Hash-based domain names would be even worse. You have no idea what site is lurking behind some big string of hex digits. You could argue that a person should just compare the hash to some known set of hashes, but that's a. cumbersome and b. unrealistic. If it's done by humans, it's error prone (a malicious site could spoof the first few chars to point to their site), and if it's done by computers, what's the point? You've now effectively created a really shitty replacement for DNS.

4
colmmacc 3 days ago 1 reply      
This mechanism is half way to a suggested scheme for domains that are less vulnerable to single-actor take-downs, posted here on HN a few days ago; https://news.ycombinator.com/item?id=6964090 .

In short: instead of merely mapping to [hash].com, the extension could map to [hash].com, [hash].se, [hash].ly, [hash].is, [hash].ch and then use a quorum consensus of whatever answer 3 or more of those names agree on. Effectively each TLD registry (and each of your registrars), along with their regulatory environment, would lose the ability to take down your name without international agreement.

For certain niches, such a feature might be a good enough value proposition to ordinary users to convince them to install an extension.

Other observation: base 36 is probably a better encoding for the hash data than hex. DNS isn't great with lengthy answers, and every byte is worth conserving. But it's cool to see something interesting like this in the form of a browser extension.

5
splatzone 3 days ago 1 reply      
This is really cool. This is my favourite kind of idea; one that changes something fundamental in a succinct way.

This makes me wonder how important domains are at all. My mum never even thinks about the domains for the websites she visits, she just types in 'ebay' and Google does everything for her.

The only time I think about URLs (outside of coding) is when I have to share a link with someone, but I wonder if even that could be replaced with a sufficiently advanced search engine.

6
pudquick 3 days ago 2 replies      
I would be in favor of this idea if a little more thought was put into how the hashing function works.

As it stands, someone typing in:

food in chicago

will get a different URL than:

food in Chicago

And the same goes for: Chicago food, chicago food, food near Chicago, etc.

Every one, with a single character difference (extra space, different word order, capitalization difference, regional spelling like theatre vs theater, etc) will result in a different hash.

You've now made 'humanized URLs' into 'no one will guess your domain'.

It's an interesting approach to avoiding search engines, but it doesn't solve the problem that search engines do solve: multiple similar but different entries resulting in the same "appropriate"/top website result.

With this approach, not even face book, Facebook, and facebook would result in the same .com (and please don't suggest just purchasing a billion domains and redirecting them all).
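
That objection is easy to demonstrate. A sketch of the mapping, assuming the extension simply SHA-1-hashes the raw UTF-8 text and appends .com, which matches the shape of the example quoted in the top comment (whether it normalizes case or whitespace first would need checking against the project's source):

  import hashlib

  def text_to_domain(phrase):
      # assumption: the raw text is hashed as-is, with no normalization
      return hashlib.sha1(phrase.encode("utf-8")).hexdigest() + ".com"

  for phrase in ("food in chicago", "food in Chicago", "Chicago food"):
      print(repr(phrase), "->", text_to_domain(phrase))  # three unrelated domains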

7
gbog 2 days ago 3 replies      
This is quite an interesting experiment, and a step in the right direction, which should and must lead us to the complete removal of domain names.

Why are domain names bad? That should be obvious.

The main symptom of domain names' inherent brokenness is that the law must patch it so that an unknown squatter cannot kidnap some domain, e.g. register "france.com" and ransom it to the people who are most qualified to claim it. This is ridiculous: the squatter should not have had the opportunity to squat this (I know it doesn't apply for "France" but it applies for many other names). Nowhere in the world do we see kids grab the seat in front of the fireplace and refuse to let their grandma have it.

Moreover, what if a single ASCII string refers to two different things equally claimable by two groups of people? E.g. what about "francfort.com"? Which city would it refer to? A contrived case: if Chinese people chose a transliteration scheme for their language in which "google" meant China, wouldn't they have some rights over google.com?

This level of brokenness is not even a leaky abstraction; it is a sunken boat everyone has to use to cross the river.

On a more philosophical note, domain names are wrong because they build a kind of universal language out of nowhere, grounded on nothing, without any kind of democratic digestion and acceptance by human beings. We could have a universal language shared by all humans, but it would take a very long process of slow acceptance, percolating through all societies all over the world. Along the way there would be many adjustments and reverts, and eventually we would come up with a set of names that is good enough. Right now, though, domain names are just a ridiculous game of musical chairs that must be stopped.

So I think using a string's hash is a nice step, because it starts blurring the domain name.

However, I think a much better scheme would be to use the hash of the page content as the domain name. In that case, once the hash is determined, who cares where the content is hosted, or which domain it has? The name would just be a way to download the content. And the job of search engines would be to point us to these hashes.

And dynamic content, you say? Which dynamic content? Does one really care about changes to wiki pages? As for a Twitter feed, each tweet is a fixed content snippet, and the JavaScript fetching them is also fixed, or could be a browser extension. And an up-to-date search engine would have the latest hash for a keyword such as "twitter" or "The Guardian".

A nice side effect would be that domain-name-based censorship would become ineffective. And downloading content could be just a matter of p2p checkouts.
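
A minimal sketch of that content-addressed idea; fetch() is a hypothetical stand-in for whatever transport actually holds the bytes (a p2p swarm, a mirror, a cache):

    import hashlib

    def content_name(content):
        # The "domain" is nothing but the hash of the bytes it names.
        return hashlib.sha256(content).hexdigest()

    def fetch_and_verify(name, fetch):
        content = fetch(name)
        if content_name(content) != name:
            raise ValueError("bytes do not match their name; reject them")
        return content  # self-certifying: no registry, registrar, or CA involved

Any peer holding the right bytes can serve them, which is exactly why name-based takedowns would stop working.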

8
chavesn 2 days ago 0 replies      
My initial thought: This is terrible, because how would we ever know which sites we can trust? Something as tiny as an extra space would change the hash value.

But on second thought, the real problem is that we (the web technology community) have assumed domain names are even a remotely suitable proxy for trust. I don't think most ordinary web users actually get this point. That's why phishing is so easy (except for the part about getting a phishing email past spam filters).

Do you think most people really know (or notice) the difference between webaccess.bankofamerica.com and webaccess.bankofamerica.x8.co? I doubt it.
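
Even software has to consult the public-suffix list to tell who controls what. A sketch using the third-party tldextract package (output is what I'd expect given the current suffix list):

    import tldextract  # pip install tldextract; wraps the public-suffix list

    for host in ("webaccess.bankofamerica.com", "webaccess.bankofamerica.x8.co"):
        ext = tldextract.extract(host)
        print(host, "is controlled by", ext.registered_domain)

    # webaccess.bankofamerica.com is controlled by bankofamerica.com
    # webaccess.bankofamerica.x8.co is controlled by x8.co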

So the real fix for this situation is creating a true trust system that most actual end users can understand and rely on.

Then, it seems only natural for something like this to be the future. UUIDs will act as the underlying addressing technology with "whatever you want" as your display name.

And as a bonus, it will really cut down on the cybersquatters' profitability.

9
jrochkind1 3 days ago 1 reply      
To the extent that TLDs are namespaces of hostnames, this just adds the equivalent of another TLD, but implements it as a weird proprietary extension on top of .com.

Why?

10
glesica 3 days ago 0 replies      
Fun story: some of the older (hehe, older as in "older than 30 or 35") people here might remember that in the 90s there was a startup that did almost exactly this, but with a browser plugin as I recall (well, not exactly: they didn't hash the text, but they took arbitrary plain text and mapped it to a URL). The idea was to sell short sentences to companies, so "seinfeld TV show" (or whatever) in the address bar would have gone to the NBC "Seinfeld" web site, etc. I think the idea was to make "deep" links easier for people to remember, but I don't remember the details; I just remember that it existed and I probably read about it in PC Magazine.

11
jyap 3 days ago  replies      
This won't work, because to me it is similar in concept to URL shorteners, the differences being the shortening algorithm and the use of expensive domains.

So for example with current shorteners you have:

http://shorturl.com/{algorithm for unique URL goes here}

In the above case, using a browser plug-in can also eliminate any server-side resolution of domains.

With this proof of concept:

http://{algorithm for unique URL goes here}.com

... Except this implementation costs $ if it is to be accepted... and to be accepted it needs to offer a benefit that URL shorteners don't already provide.
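
Side by side, with a hypothetical 12-character truncated hash standing in for both schemes' codes:

    import hashlib

    phrase = "food in chicago"
    code = hashlib.sha256(phrase.encode("utf-8")).hexdigest()[:12]

    # Shortener: the code lives in the path of one cheap domain.
    print("http://shorturl.com/" + code)
    # This extension: the code *is* the domain, one paid .com per phrase.
    print("http://" + code + ".com")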
