Hacker News with inline top comments, 21 May 2014
The greatest bug I never fixed (2010) makandra.com
177 points by triskweline  2 hours ago   36 comments top 10
kemayo 2 hours ago 1 reply      
Blizzard added a function to get around this, in response to this sort of chat-tunneling: SendAddonMessage [1]

It stops the drunk text-transform, and also doesn't have to worry about hiding the text from the channel you're trying to talk into.

Overall, Blizzard has been very good about adapting to what their addon community is trying to do. They add official support for hacks if they like what the addon does for the game, and deliberately break some if they don't like its effect.

[1]: http://www.wowwiki.com/API_SendAddonMessage
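To make the failure mode concrete, here is a rough Python sketch of why chat tunneling breaks under a drunk transform. The slur rule (every "s" becomes "sh") is an approximation of WoW's behavior, and the hex-encoding scheme is a hypothetical workaround, not what any actual addon did:

```python
def drunk_slur(text):
    # Rough approximation of WoW's drunk chat transform: "s"
    # becomes "sh" (the real rules are more involved than this).
    return text.replace("s", "sh").replace("S", "Sh")

def encode_payload(data):
    # Hypothetical tunneling scheme: hex-encode the addon payload.
    # The hex alphabet (0-9, a-f) contains no "s", so the slur
    # transform passes it through untouched.
    return data.encode().hex()

msg = "status:42"
print(drunk_slur(msg))                  # corrupted on the wire
print(drunk_slur(encode_payload(msg)))  # survives intact
```

Any encoding whose alphabet avoids the transformed characters is immune, which is the same problem SendAddonMessage sidesteps entirely by bypassing the chat transform.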

patio11 1 hour ago 1 reply      
The author is the principal of the company which produces RailsLTS, which I was involved in as a customer. I wasn't aware that we shared the WoW connection, but that makes me like them even more. (I sort of hope we do not need an advisory about RCE via session cookie tampering because Rails is drunk.)
twic 2 hours ago 0 replies      
So getting drunk makes it harder to make new friends? Oh computers, you so crazy!
_asciiker_ 2 hours ago 0 replies      
This is one of the most clever ways to blame a bug on booze I have ever read!
markbnj 2 hours ago 10 replies      
That's a great story. How would you have fixed it without having to find another channel to transmit over? Could the plugin sense the character's condition and delay until it abated?
esquivalience 2 hours ago 0 replies      
I wonder if this could be exploited by registering usernames that accord with the errors? The user would[0] then be trojanned into people's friend lists with a presumed level of trust they did not deserve.

Not sure if this would have any value, but I'm sure someone enterprising could find a way to exploit others' trust. That's not a new concept.

[0] (This of course assumes that the only lisped-up content is the usernames, not the whole syntax, which I think is an acceptable assumption given that it's fully out of date anyway)

legacy2013 31 minutes ago 0 replies      
I wish I had thought of this when I was trying to develop a WoW addon. I was building an advanced party GUI and wanted to communicate the whereabouts of each user, but was stymied about how to send information between each instance.
akc 14 minutes ago 0 replies      
So you're saying it's a lossy format.
chris_wot 1 hour ago 1 reply      
I wonder if there would be a way of sending drunk text that forces it to sober text?
superduper33 2 hours ago 0 replies      
Guide to Landing Page Optimization moz.com
74 points by ra00l  2 hours ago   6 comments top 4
bhartzer 1 hour ago 0 replies      
I totally agree, never send people to your site's home page. Unless your site's home page is a landing page.
sayemm 57 minutes ago 0 replies      
This is a great list of tips, thanks for posting.
mrfusion 46 minutes ago 2 replies      
Are there any simple ways to build a simple landing page for a product idea I'd like to test?
ra00l 46 minutes ago 0 replies      
For the last few years, I've read a lot of conversion optimization articles.

This one summarizes 90% of the tips and tricks that I learned. Glad you HNers enjoy it!

Stealth Infrastructure vvvnt.com
38 points by hxrts  2 hours ago   8 comments top 6
dmix 20 minutes ago 0 replies      
> Were AT&T customers and their partners on the border with Canada being routed through police hardware? Having built and programmed my own BTS I knew this to be technically possible. If so, I had found a Man in the Middle, where state law enforcement was masquerading as civilian communications infrastructure.

Interesting. I'm curious if this is always the case near borders and in airports. NSA/CSEC leaks have shown they are vacuuming up all cross-border roaming alert SMS texts.

tehwebguy 0 minutes ago 0 replies      
That was a great read!

So, was it a MITM BTS?

knowaveragejoe 29 minutes ago 1 reply      
I've always been fascinated by this. Apologies if it is linked somewhere in the article and I missed it, but where are the forums where radio enthusiasts share hidden towers they've found?
dm2 1 hour ago 1 reply      
One of your images is 10 MB; it probably doesn't need to be that large in the middle of an article. Maybe link to the full-size version instead?
hxrts 58 minutes ago 0 replies      
You can also find an interview with Julian Oliver here: http://vvvnt.com/media/julian-oliver
irsneg 28 minutes ago 0 replies      
Blocked from Russia :/.
Principles of high-performance programs (2012) libtorrent.org
36 points by arunc  2 hours ago   discuss
ElasticDump Import and export tools for ElasticSearch github.com
27 points by evantahler  1 hour ago   2 comments top
evlapix 1 hour ago 1 reply      
I haven't used this yet, but it seems like it should make updating mappings much easier. See the "using aliases for greater flexibility" section in this article: http://www.elasticsearch.org/blog/changing-mapping-with-zero...

I had to write the scripts to implement this concept myself and it wasn't a quick and easy task. It would have gone a lot easier if I had been able to abstract away some of those queries with a tool like this.

Netflix to expand to Germany, France and Switzerland bbc.com
52 points by nkurz  2 hours ago   47 comments top 7
jordigh 46 minutes ago 5 replies      
I find it so ridiculous that an internet company has to "expand" to another country.

Why can't it just be a matter of removing the geoip checks from your servers?

(I know, I know, still... the internet was not meant to be this way.)

frik 27 minutes ago 0 replies      
The title is incomplete/misleading - six European countries, not just 3:

  Germany, France, Austria, Switzerland, Belgium and Luxembourg

Ecio78 1 hour ago 1 reply      
Netflix also must contend with the fact that French audiovisual laws require local broadcasters to invest significant sums in domestic content. However, Les Echos newspaper has suggested Netflix might get around this by basing the service in Luxembourg.

Actually, Netflix is already based in Luxembourg for its European activities, and there have been reports that they're going to move to the Netherlands: http://www.wort.lu/en/business/move-to-the-netherlands-netfl...

grosbisou 1 hour ago 0 replies      
> "More likely, we'll figure out some stuff's working, some stuff's not; we'll adjust the formula."

I absolutely love this mindset from Netflix. In every conf talk and article I hear from them, they talk about stuff that didn't work, or worked for a while but didn't scale, and then show what worked better.

dan_bk 1 hour ago 3 replies      
Well, it remains legal to download music and movies in Switzerland (PirateBay & Co), so there's that.
erikb 56 minutes ago 2 replies      
I'm German and want to say there is no real competition here. As far as I'm concerned, we are desperately waiting for a reasonable streaming service! Even watching in English would not be a problem as long as we could have Netflix! There are people here paying for VPN and Netflix, so money should not be an issue.
w1ntermute 1 hour ago 7 replies      
> while French TV has a lot of subtitling - in Germany foreign language movies and TV shows are generally voiced over

Can anyone explain why this is? I hate watching dubbed video, especially when it's live action.

Show HN: PressureNet The Weather's Future pressurenet.io
27 points by cryptoz  1 hour ago   9 comments top 4
mturmon 41 minutes ago 1 reply      
From the top of the about page (http://pressurenet.io/about/) -- "Our mission is to dramatically improve weather and climate forecasting."

What does this have to do with climate? It seems like your time scale doesn't overlap with typical climate models. I guess it feels like a speculative claim (better pressure measurements -> fine-grained model improvements at small temporal scales -> better climate model outcomes).

Also, I wonder if you have a link to a paper or presentation that details how these measurements could fit in with the assimilation models that are used in weather forecasting? I see a link to Cliff Mass' blog (as a whole), but I'm more interested in a specific reference. In particular, I wonder if it's possible to quantify how much a perfectly-accurate ground-level pressure field could constrain upper atmosphere dynamics. Has there been a session at AGU (http://fallmeeting.agu.org/2014/), for example, examining how this could work? Or is it too new for this yet?

danparsonson 1 hour ago 1 reply      
Love the idea; naive, noobie question: since your sensor density presumably depends very much on population density, how does this affect your model? I was under the impression that regularly-spaced measurements were important but presumably not so much? Or is this still better (or will be eventually) than the national network?
cryptoz 1 hour ago 0 replies      
Oops, the API page has a link to documentation that's 404ing right now, we'll fix that ASAP. Here's the file: http://apps.cumulonimbus.ca/PressureNetDataAPI-v2.pdf
beardicus 1 hour ago 1 reply      
404 on the "Read the full documentation" link under "API" on the API page. (Edit: I see you already found the broken link and the file is yonder http://apps.cumulonimbus.ca/PressureNetDataAPI-v2.pdf )

Do you have any examples/links to research being done with the data currently? I poked through the blog quickly and didn't see anything in depth.

Go Read: One Year with Money and App Engine mattjibson.com
99 points by kasbah  5 hours ago   38 comments top 16
alec 2 hours ago 1 reply      
Interesting writeup, thanks.

I'm a user and have been negatively impacted by the feed fetching optimizations - daily feeds are often a few days behind and come in bunches. Two examples:

- Penny Arcade updates its comics Monday, Wednesday, and Friday, always at 7:01AM UTC, and then news other times during the week. It's Wednesday at 4:25PM UTC - 9 hours after - and goread hasn't picked it up.

- Dinosaur Comics is updated weekdays. I'll eventually get all of them, but usually two or three at a time. For example, yesterday I marked all my feeds as read; today, I have entries from Monday and Tuesday, but not from Wednesday.

I had hoped that the move to the everyone-pays model would give you the resources (either developer or quota) to fix these issues, but they've gotten no better or maybe worse.

I haven't looked at what you're doing, but I believe Google Reader used pubsubhubbub where available to reduce/eliminate polling for many popular feeds.

I honestly didn't have a great experience with my last bug report, so I haven't tried again.

tantalor 8 minutes ago 0 replies      
> number of users is not a factor for Go Read

You could charge users a fee per feed proportional to server cost (e.g., frequency of posts) and inversely proportional to the number of subscribers.

* Unpopular/infrequent feeds (like my friends' blogs) would be free

* Popular/infrequent and unpopular/frequent feeds would be cheap, maybe $1/year/user/feed

* Popular/frequent feeds would cost more, maybe $10/year/user/feed

This way you can peg your income to an exact multiple of your costs.
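tantalor's pricing rule is easy to sketch; the cost and margin constants below are made-up illustrative numbers, not Go Read's real costs:

```python
def yearly_fee_per_user(posts_per_year, subscribers,
                        cost_per_post=0.001, margin=2.0):
    # Fee is proportional to server cost (posts fetched per year)
    # and inversely proportional to the number of subscribers who
    # share that cost; margin pegs income to a multiple of cost.
    return margin * cost_per_post * posts_per_year / subscribers
```

Under these assumed constants, a friend's blog (a dozen posts a year, a handful of readers) rounds to pennies, while a firehose feed with few subscribers costs real money, matching the three tiers above.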

BitMastro 2 hours ago 2 replies      
I would like to thank you for creating go read and making it open source, it's what I use as RSS reader, using my own free quota of App Engine. I hope I can contribute back some day.

  "30-day trial: This action cost me about 90% of my users. Many were angry and cursed at me on twitter. I agree that it is sad I did not say I was going to charge from the beginning, but I didn't know that I would be paying hundreds of dollars per month either."
Honestly, I think the sense of entitlement nowadays is way too high: people use a product that is free and that took weeks or months of your own free time, and then complain when you change it.

If you decide to charge for it, you're a greedy bastard; if instead it's free, they say "if you aren't paying, you are the product". Others complain that the product doesn't work when it's actually a case of PEBCAK. When it's not, it means you're going to say goodbye to a couple of nights' sleep or a weekend or two, or maybe it's ONE feature away from being perfect (again).

...sometimes I hate people :(

jshen 3 hours ago 1 reply      
"Second, it is impossible to meet Google's terms of service with an RSS reader. They dictate that ads may not be shown along adult or copyrighted content. Determining that for external sources of data was not going to happen. I got a few emails about violations and added a system to prevent ads on certain feeds. But I was playing whack-a-mole and it would never end. Eventually I made a mistake and they banned my site. I gave up on ads at this point."

I've experienced this myself, and I'm hearing it more and more from others. Maybe this is a market need that is going unfulfilled.

hueving 23 minutes ago 0 replies      
>Again, it's free software so I'm not sure what the problem is. I learned from this experience that some people will never pay, and some will. I'm not opposed to alienating people who won't support all of the time, effort, and money I put into this product.

It's always surprising to me when devs are surprised by outrage at the change/removal of a free product. There is a non-negligible cost for a user to research/choose/setup/learn a new tool. In this case, a feed reader has favorited articles, read/unread state of articles, etc. When you pull the rug out from users who have made that investment and now have to start over, they are going to be mad, regardless of what they paid.

programminggeek 2 hours ago 1 reply      
I'm actually a little surprised it costs so much to run on App Engine. Given that you've settled into a more predictable user model in terms of costs and probably growth, why not lower your costs by investing in dedicated server space or possibly reserved instances on AWS (unless there is an App Engine equivalent)?

Without knowing what your server load looks like, I would imagine you could save a couple hundred dollars a month in hosting, which would go right to your bottom line profit. A couple hundred dollars a month isn't huge, but at this point in your business that's say $2,400 a year. From the looks of it, that's at least 2-3 months worth of revenue or almost 5 months worth of profit.

I think it's at least worth considering with where your project is at right now.

SiliconAlley 2 hours ago 0 replies      
Congratulations on an income-generating project! That's an immense achievement. Perhaps the infrastructure cost is worth the headaches you spare yourself by being on GAE, but have you looked at dedicated infrastructure from the likes of OVH/Hetzner (on both providers you can get ~2TB storage/64GB mem boxes for around $100/mo)? I moved all my projects to a single OVH box and it's been a champ. It does mean slightly more maintenance burden (e.g. you need to roll your own backup/restore plan). Perhaps it is a poor fit, but that seems like a possible path to get those costs way under control.
grey-area 2 hours ago 1 reply      
I wonder if the author has costed out how much running this app would cost using a VPS provider like Linode or Digital Ocean? You can get a lot of resources for under $100 a month and the needs of most apps are pretty basic (file storage+db+web server). Given the memory requirements and speed of Go, it shouldn't take much hardware to serve a lot of requests with a few go instances behind nginx say.

I'm not convinced that being locked in to one cloud provider's services and APIs is healthy long term - it means you are locked in to that ecosystem and it's harder to consider alternatives, even if your needs are quite straightforward, so you can end up in a situation where you're paying hundreds of dollars a month for hosting when you don't need to be.

revorad 2 hours ago 0 replies      
Congrats Matt. Well done on testing various revenue options, instead of eating the costs just to be the "nice guy", only to have to shut it down eventually.

Ruben Gamez of Bidsketch had a similar story about switching from freemium to paid only -


curiousDog 1 hour ago 1 reply      
From the article:

"A simple rule in computers is to make something run faster, have it do less work. I remember reading about how grep works quickly. Instead of splitting a file by lines and then searching for the string, it searches for the string, then finds the newlines on either side. Thus, if a file has no matches, it won't ever have to do the work for line splitting. Done the naive way, it would always split by line even if there was no match. Do less work."

Good observation, but I doubt that's even remotely the reason why grep works quickly.
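For reference, the match-first strategy the quoted paragraph describes looks roughly like this in Python (a toy sketch, not GNU grep's actual implementation, whose speed also comes from other techniques):

```python
def grep_fast(pattern, text):
    # Search the whole buffer first; only compute line boundaries
    # around actual matches, so a file with no matches never pays
    # for line splitting.
    results, pos = [], 0
    while (i := text.find(pattern, pos)) != -1:
        start = text.rfind("\n", 0, i) + 1
        end = text.find("\n", i)
        end = len(text) if end == -1 else end
        results.append(text[start:end])
        pos = end
    return results

def grep_naive(pattern, text):
    # Naive version: split every line even when nothing matches.
    return [line for line in text.splitlines() if pattern in line]
```

Both return the same matching lines, but the fast version does no per-line work on the (common) no-match case.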

bueno 2 hours ago 0 replies      
Great read.

This is especially interesting to me. My side project http://www.longboxed.com was recently launched to a modicum of regular users (~300). My app runs on Heroku on the free tier with the 'Hobby Basic' level of the Heroku Postgres database. All told it costs me ~9 dollars a month. No big deal.

However, if I ever stepped the site up to another tier I'd be looking at ~50 bucks for the database and ~36 bucks for another process instance. These expenses can add up fast for a site that doesn't currently generate any money.

Anyway - it is nice to see examples of introducing a pay model into your app after it has launched.

psankar 1 hour ago 0 replies      
A long time user of your app, but from my own quota of appengine.

Thank you for such a lovely product. Someday I would like to contribute to your project(s) more.

zura 3 hours ago 4 replies      
Btw, what are some good options nowadays for [almost] free website launch? Until/(if ever) you get "big" of course...
simonbrown 3 hours ago 1 reply      
Did you consider grandfathering existing users in?
rootuid 3 hours ago 0 replies      
Nice article. Thanks for sharing!

btw: lots of typos in the article, be sure to spell check!

couchand 3 hours ago 1 reply      
Open Football Data openfootball.github.io
61 points by vinhnx  4 hours ago   31 comments top 10
cabbeer 3 hours ago 2 replies      
Does anyone know if this is available for (American) football?
sourc3 2 hours ago 1 reply      
You, my friend, are the best. As a huge soccer fan and a developer, getting this sort of data is really hard unless you shell out hundreds of dollars a month.

Already thinking about the apps that will use this! Thank you.

llimllib 3 hours ago 0 replies      
I did something pretty similar, though definitely less comprehensive: https://github.com/llimllib/soccerdata/ . Will be using this, thanks!
ddispaltro 2 hours ago 1 reply      
Is there an open database for horse racing?
redshirtrob 1 hour ago 0 replies      
This looks cool. I see Gold Cup and NA Champion's League repos. Is there a plan to add MLS data? I know some people who would be super excited to get baseball-reference.com level data for MLS.
abeisgreat 3 hours ago 1 reply      
I'm curious if this data is actually public domain. Where are they sourcing it from? Are they legally allowed to redistribute? Etc.
ngoel36 2 hours ago 1 reply      
Is there anywhere to get real-time play-by-play data?
ntietz 3 hours ago 2 replies      
This is really cool! Does anyone know if there are similar datasets for other sports out there? Even less-clean datasets, as long as they have permissive licensing to allow sanitization and republication.
chevreuil 1 hour ago 1 reply      
The data format bothers me. Why not use a standard one like JSON?
rurabe 2 hours ago 0 replies      
perfect timing. thanks!
Remembering, as an Extreme Sport well.blogs.nytimes.com
18 points by cpeterso  2 hours ago   5 comments top 2
was_hellbanned 4 minutes ago 0 replies      
For memory techniques, including memory palaces, I can recommend "Quantum Memory Power" by Dominic O'Brien. He was (is?) a world champion at memorization, and does pretty well at presenting in his audio book. Some of it feels a bit out of date (e.g. popular names he uses to help with memorization), but if you can get past it there's a lot to learn.

"Tricks of the Mind" by Derren Brown also has some basic information on memorization, along with a host of other interesting topics.

Monkeyget 1 hour ago 3 replies      
If you are interested in the topic I recommend the book Moonwalking With Einstein mentioned in the article. It's both entertaining and informative. I particularly liked the parts about mastering a skill as efficiently as possible.

I tried some memorizing techniques. It's a great feeling to improve 10x at a (specific) memorizing task in very little time and effort by using one weird trick.

I used to try to remember random words during my commute and recall them when coming back. I took a list of random words from /usr/dict and set myself a timer of a few minutes to remember 30 words using memory palace. I used various places from the commute itself to store words. It was fun to see the words pop-up automatically in specific places on my way back. My performance declined after a while because I wasn't able to erase the previous words.

Haskell Packages for Development wunki.org
27 points by lelf  3 hours ago   8 comments top 3
cordite 16 minutes ago 1 reply      
It still takes a long time when sandboxing: if you're on a project using yesod, you get to re-build the entire dependency set for each project.
mlitchard 2 hours ago 1 reply      
Greg Weber gave an awesome talk at BayHac 2014 on just this subject: https://docs.google.com/presentation/d/1suMuLRo1xS5NxWn-L9lG...
moomin 1 hour ago 1 reply      
Of course, when I last tried installing stylish-haskell (last week), it insisted on taking non-compatible/broken dependencies. Yes, on a clean environment.

When I learn more Haskell, I may succeed in figuring out what's wrong.

Summers on Piketty democracyjournal.org
27 points by marojejian  2 hours ago   28 comments top 5
bsbechtel 1 hour ago 7 replies      
Does anyone think his solution of a global wealth tax will actually help lessen inequality? This solution seems simplistic and naive to me, and puts those at the bottom of the ladder at the mercy of these 'global wealth tax collectors' instead of the wealthy. I don't see much difference in these two, other than one group is at the mercy of market demands, and the other is supposedly at the mercy of a democratic populace. Correct me if I'm wrong, but he didn't write anything about actually empowering those at the bottom of the economic ladder, did he? Where are those solutions being offered up?
acr25 12 minutes ago 0 replies      
Ah, I remember summers on Piketty. The fresh seafood, the smores, the fireworks. Ferry horns in the distance. And, oh yeah, the global wealth tax.
carsongross 1 hour ago 1 reply      
The elephant in the room, of course, is Mr. Summer's object of desire, the Fed.

Until the Fed is dealt with and the financial sector is subject to true market forces, it will continue to dominate the economy and produce destructive malinvestment.

selmnoo 1 hour ago 1 reply      
> So why has the labor income of the top 1 percent risen so sharply relative to the income of everyone else? No one really knows.

Really? Isn't this what the bigger half of the book is about? It's because of rentier/patrimonial-capitalism, opportunity not being equally distributed in the system, etc.

alexeisadeski3 40 minutes ago 3 replies      
At a time when taxes are obscenely high - at least in the US - the discussion should be of massive cuts, not new taxes.

Alas, I am not optimistic. People have suffered for generations under horrific conditions. No reason it can't happen here.

Or to be more precise: No reason it can't happen here again. Don't forget slavery and Native Americans.

Llanfairpwllgwyngyll wikipedia.org
6 points by DanielRibeiro  36 minutes ago   3 comments top 2
ghayes 10 minutes ago 0 replies      
> The long form of the name was invented for promotional purposes in the 1860s.

So this was done as a marketing stunt, just a long time ago.

ColinWright 13 minutes ago 1 reply      
No idea why this is here ...

Anyway, this is one of the first things I learned to say and spell when I first moved to the UK. The full name is Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch, although read the page to see the history.

Curious, yes, but ... ?

Anyone who's interested: I will be giving some math workshops and a lecture in Bangor later this year. I can probably get invites for guests if anyone is interested.

Good Poker Players Aren't Lucky bloombergview.com
52 points by andrewljohnson  2 hours ago   54 comments top 16
sharkweek 1 hour ago 1 reply      
This is a really interesting read -

I played a lot in college, but never really got "good" per se. We had a group of about 20 or so of us who played semi-regularly. There were two players who were far superior to everyone else, but they didn't always win. Just like everyone else, they naturally suffered the turmoil of a bad beat here and there. But aggregated over the two years we all played together, their winnings put them far ahead of everyone else.

And I think that's probably the main point here. These two knew how to grind out the bad beats and come out on top over a long enough time frame.

I also have a good friend who crabs in Alaska (he and his dad are on Deadliest Catch together) - each season is a "gamble" in how long it will take to catch enough crab to meet quota, but it's not like they're putting money into a slot machine and hoping for the best. There's a system that leads to success. Sometimes they get "lucky" and their season ends quickly; other times it's more of a lengthened grind.

They seem awfully similar.

mef 1 hour ago 4 replies      
His study of more than a billion hands of online Texas Hold'em found that 85.2 percent of the hands were decided without a show of cards. In other words, players' betting decisions were of overwhelming importance in determining the outcome. Of the remaining 14.8 percent, almost half were won by a player who didn't hold the best hand but instead had induced the player with the best hand to fold before the showdown.

Isn't inducing the player with the best hand to fold before showdown the same as a hand decided without a show of cards? Or do they mean that 85.2% of hands were decided without seeing the flop? Neither sounds right.

candybar 1 hour ago 3 replies      
As a casual poker player, I find this line of reasoning specious. Sports betting, blackjack and even roulette with a biased wheel are all games of skill by this definition. Poker games are designed specifically to appeal to our gambling instinct so that better players can play the house while worse players gamble away their chips. It's true that from the perspective of the best players, it's not truly gambling, but they are not the ones gambling laws are written to protect. This isn't too different from a casino owner saying - I'm not gambling, I make money every day. You may not be gambling but your customers are. The same is true of poker - even if you're not gambling, your opponents are.

Edit: Will downvoters explain their reasoning? Btw, someone else said something about how poker is a game of luck at low skill levels and a game of skill at high skill levels, but that's not really how it works. Poker is a game of skill if you're playing against bad players, because the game is consistently beatable with sufficient skill. The better and more sound your opponents become, not just relative to you but on an absolute basis, the more important the role of luck becomes.

sejje 1 hour ago 2 replies      
Any game where you can play badly on purpose has skill involved.

Some games still involve more luck than skill--blackjack, for instance.

Other games involve more skill than luck, and poker certainly falls into that category.

Poker is a game of skill with an element of luck. It is not a game of chance.

ronaldx 54 minutes ago 1 reply      
>His study of more than a billion hands of online Texas Hold'em found that 85.2 percent of the hands were decided without a show of cards. In other words, players' betting decisions were of overwhelming importance in determining the outcome.

It is worth noting that the other 14.8% of hands, where a showdown is reached, are typically the ones where (much) larger amounts of money are exchanged.

So, betting decisions might decide the outcome of a majority of hands, but it's unlikely this corresponds to a majority of the money flow.

skizm 30 minutes ago 0 replies      
I love poker. I'm not great at it but it is fun. Also it is a great metaphor for life. If you poke your head in on any given hand you might conclude that the game is all about luck. However if you play long enough good players emerge on top and bad players lose their money. Sometimes terrible players win big and sometimes good players lose everything.

Just like life: the harder you work, the luckier you get.

ngokevin 1 hour ago 0 replies      
WSOP Main Event final tables are often used as a prominent exhibit that poker is not "tru gambools". The final table is the final 9-10 players of a tournament.

Dan Harrington made the final table in 1987 (152 entrants), won the final table in 1995 (273 entrants), and made back-to-back final tables in 2003-2004 (839 and 2576 entrants). Or in a lesser example, Johnny Chan won back-to-back in 87-88 (152 and 167 entrants).

The Cavs winning the #1 pick three times in the last five years, however. That is rigged.

stygiansonic 44 minutes ago 1 reply      
I suppose, like the article mentions, it depends on the circumstances.

It does appear that serious poker players do better in the long run, as they consider not only pot odds when playing but also management of their bankroll to ensure long-term survival. Is there an element of chance? Of course, but it's not the only factor - few things are black and white.

But if someone chooses to go all-in blind pre-flop, then they are choosing to play the game as pure gambling, and their version of the game would be pure chance.

But again, it's all about the circumstances. Poker is often compared to day trading in the markets, and though it's not a perfect comparison, there are some analogies. For example, options can be and often are used in complex strategies to either reduce risk or enhance returns. But that doesn't mean they can't be used to speculate wildly - in those situations, the potential payoff profile begins to look like a straight-up bet.
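The pot-odds arithmetic mentioned above is standard textbook material and easy to sketch:

```python
def pot_odds(pot, call):
    # The fraction of the final pot your call represents; calling
    # is profitable in the long run when your estimated win
    # probability exceeds this fraction.
    return call / (pot + call)

# Facing a $50 call into a $150 pot, you need better than 25% equity.
print(pot_odds(150, 50))
```

This is exactly the kind of repeatable, long-run edge that separates the serious players from those playing the game as pure chance.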

fragsworth 1 hour ago 0 replies      
There is a lot of nebulous, vague law in the U.S. about games when you have the chance to win real money. Each state has its own self-inconsistent set of laws, usually made to preserve the monopolies of neighboring casinos. There is very little actual reasoning behind the laws - it's corruption, plain and simple.

This stifles innovation in real-money games. No company will risk spending any money on the design/development of a new real-money game when there is a tangible chance of going to jail and losing everything. This frustrates me a lot, because I enjoy real-money games (poker included), and I'd love to play new kinds of real-money games, but these freedoms don't exist.

whbk 1 hour ago 0 replies      
Well, now that we (nearly) have that strawman out of the way... I guess we can shift gears and focus on Sheldon Adelson's 'online poker is so much more dangerous for compulsive gamblers and minors' doozy. Oh, and can't forget about Terrorist Hold 'Em!


erikb 39 minutes ago 0 replies      
Is the poker boom of ten years ago already forgotten? This fact shouldn't be surprising after TV, radio, and the internet told everybody exactly this for years. What about the movie Rounders? Is it not watched in the US any more? I still have the DVD and watch it 4-5 times a year.
tom_jones 1 hour ago 0 replies      
The latest statistical analysis suggests that baseball involves more matters of chance than anyone would like to admit. Beyond home runs and strikeouts, it's mostly random. Luck and skill are all around us to varying unmeasurable degrees, including in a game of poker.
prestonbriggs 1 hour ago 2 replies      
I wonder when they'll go after people for gambling in the stock market? How would you defend them?
darasen 1 hour ago 1 reply      
There is a reason that the top players happen to be very good at math.
jqm 56 minutes ago 1 reply      
Roulette is often held up as an example of "random chance", but this is not entirely true.

There are on occasion patterns in Roulette, especially with certain dealers. In addition, there is a little trick casinos sometimes employ. They loosen the legs on the table to the point it wobbles slightly. The dealer will then lean against the table and watch as the ball is about to drop. They can then pop the ball out of an undesired slot with a well-placed shove of the hip. Just something to watch out for if you are a gambler.

I firmly believe that, just like in computing, the entire concept of "random" depends mostly on one's frame of reference.

lasermike026 1 hour ago 2 replies      
I have to disagree. Moneymaker, a complete nobody, won the World Series of Poker.
Algorithmic tagging of Hacker News or any other site algorithmia.com
55 points by doppenhe  5 hours ago   38 comments top 17
Goosey 4 hours ago 1 reply      
Looking at the HN demo, I'm impressed. There are definitely relevant tags being generated. Unfortunately there are also some noisy tags which clutter the results. Taking one example, the post "DevOps? Join us in the fight against the Big Telcos" is given the tags "phone tools sendhub we're news experience customers comfortable"; I would say that "we're" is unarguably noise. Another example: "Questions for Donald Knuth", with tags "computer programming don i've knuth taocp algorithms i'm"; I would call out "i've" and "i'm".

There are other words in both examples that I personally would not use as tags, but I can't really say they would be universally not-useful. I think a vast improvement could be made just by having a dictionary blacklist filled with things like these - from this tiny sampling, contractions seem to be a big loser.
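A dictionary blacklist like the one suggested above could be a one-line filter; a minimal sketch, assuming tags arrive as a plain list of strings (the blacklist entries and names are my own, not from the demo):

```python
# Hypothetical post-processing pass over generated tags: drop contractions
# and other stopword-like noise before display. The blacklist entries are
# illustrative, not taken from the Algorithmia demo.
NOISE = {"we're", "i've", "i'm", "don't", "it's", "that's"}

def clean_tags(tags):
    """Keep tags that are neither contractions nor blacklisted noise words."""
    return [t for t in tags if "'" not in t and t.lower() not in NOISE]

raw = ["phone", "tools", "sendhub", "we're", "news", "experience"]
print(clean_tags(raw))  # ['phone', 'tools', 'sendhub', 'news', 'experience']
```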

vhf 1 hour ago 1 reply      
Very interesting.

I have been doing some research into automatic tagging lately, and I found several Python projects coming close to this goal: https://pypi.python.org/pypi/topia.termextract/ , https://github.com/aneesha/RAKE , https://github.com/ednapiranha/auto-tagify

but none of them is satisfying, whereas Algorithmic Tagging of HN looks pretty good.

I have been trying to implement a similar feature for http://reSRC.io, to automagically tag articles for easy retrieval through the tag search engine.

snippyhollow 4 hours ago 1 reply      
I did that in 2012 for a pet project with a friend https://github.com/SnippyHolloW/HN_stats

Here is the trained topic model (Nov. 30, 2012) with only 40 topics (for file-size mainly) https://dl.dropboxusercontent.com/u/14035465/hn40_lemmatized...

You can load it with Python:

  from gensim.models import ldamodel

  lda = ldamodel.LdaModel.load("hn40_lemmatized.ldamodel")
  lda.alpha = [lda.alpha for _ in range(40)]  # because there was a change since 2012
  lda.show_topics()

Now if you can figure out what this file is: https://dl.dropboxusercontent.com/u/14035465/pg40.params I'll buy you a beer next time you're in Paris or I'm in the Valley. ;-)

dlsym 3 hours ago 0 replies      
This has real poetic potential:

    "Erlang and code style"     process erlang undefined    file write data    true code

zokier 3 hours ago 0 replies      
After watching "Enough Machine Learning to Make Hacker News Readable Again"[1] I thought of recommendation engine/machine learning based linkshare/discussion system (eg HN/reddit style). Your frontpage would be continuously formed by your up/down-votes. I'm not sure if the same could be applied to comment threads too, essentially creating automatic moderation. Algorithmic tagging would certainly be useful for that kind of site.

[1] https://news.ycombinator.com/item?id=7712297

NicoJuicy 3 hours ago 1 reply      
I like this project (I am creating something like this, so I'm pretty serious).

But doesn't the auto-tagging feature make too much noise for a business use-case? For example, it tags an article about Amazon and includes Google in the tags. White-listing words wouldn't fix this (Google would be a whitelisted word if Amazon is).

I don't know about LDA though. Perhaps proper tag administration would fix this, but then you'd have to remove tags on the go.

NicoJuicy 4 hours ago 1 reply      
Lol, this seriously took me by surprise. I'm currently developing a HackerNews with tags (you can self-host it). I quickly generated this Google Form, if you are interested in being a beta user in the near future


PS. Screenshot included + it's already in alpha at a company with 100 users.

gibrown 3 hours ago 1 reply      
LDA/Topic Modeling is interesting stuff. I always feel like the way this data gets surfaced as "tags" is very ineffective. Any non-tech person would look at this and generally be confused. So this item is triggering my rants against tagging:

- Tagging is like trying to predict the future. What word will help some future person to get to this content?

- Tagging often tries to fill the hole left by bad search

- There is no evaluation method to measure how good a set of tags are

- Tags make very bad UI clutter.

Some of these points are related to encouraging users to tag content, but auto-tagging also seems problematic.

To me something more along the lines of entity extraction is more useful because it is a well defined problem, and can be used to improve a lot of other applications.

shawabawa3 4 hours ago 1 reply      
Doesn't seem to handle pdfs properly. For the mtgox link it comes up with

> stream rotate type/page font structparents endobj obj endstream

draz 1 hour ago 1 reply      
@doppenhe - any hunch as to how well it would work on transcripts?
platypii 5 hours ago 1 reply      
Direct link to HN with tags: http://algorithmia.com/demo/hn
NKCSS 3 hours ago 0 replies      
Not too impressed, to be honest; singular/plural forms are not treated as equal. I'm not familiar with LDA, but I've written an LSA implementation in the past, and it did a lot better than what is shown here.
doczoidberg 2 hours ago 0 replies      
It does not work so well with German sites. Is there no blacklist of overly generic terms for languages other than English?
Theodores 3 hours ago 1 reply      
This does well on the 'T Shirt test' on some sites, e.g. http://www.riverisland.com/men/t-shirts--vests

This could be really useful in ecommerce for creating search keywords for category pages. The noise in the results doesn't matter; so long as it gets 'T-Shirt' and someone searches for 'T-shirt', all is well and good.

Are you looking to plug what you have into something such as the Magento e-commerce platform? The right clients could pay proper money for this functionality. It is something I would quite like to speak to you about.

hnriot 1 hour ago 0 replies      
Maybe also take a look at AlchemyAPI
justplay 1 hour ago 0 replies      
looks cool.
EGreg 3 hours ago 1 reply      
LDA is very impressive. But it might be better to have an iterative algorithm that forms a linear-algebraic basis from several tags (and let people add more tags as vectors into the mix) and then every time people upvote something, you update their interests (points in the linear algebraic space) and then every time an article gets upvoted you update ITS tags ...

after a while the system converges to a very useful structure and new members can see correctly tagged articles and the system learns their interests by itself

do you know anything like this already existing?
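A minimal sketch of the idea above, assuming a fixed tag basis and a simple mutual-nudge update rule (the tag space, learning rate, and update rule are my own illustrative choices, not an existing system):

```python
# Users and articles live in the same tag space; each upvote nudges the
# user's interest vector toward the article and the article's tag vector
# toward the user. Repeated votes converge both toward a common point.
TAGS = ["python", "ml", "security", "hardware"]
LR = 0.1  # learning rate for the mutual update (arbitrary choice)

def upvote(user, article, lr=LR):
    """Return updated (user, article) vectors after one upvote."""
    new_user = [u + lr * (a - u) for u, a in zip(user, article)]
    new_article = [a + lr * (u - a) for u, a in zip(user, article)]
    return new_user, new_article

user = [0.0] * len(TAGS)        # a brand-new member with no known interests
article = [1.0, 1.0, 0.0, 0.0]  # an article tagged python + ml
for _ in range(20):             # twenty upvotes of a similar article
    user, article = upvote(user, article)
print([round(u, 2) for u in user])
```

The two vectors converge toward their midpoint, so the new member's profile picks up weight on the article's tags and none on the others - the convergence property described above.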

Show HN: Learn iOS development by building apps iosacademy.io
7 points by johnfig  46 minutes ago   2 comments top 2
sharp11 9 minutes ago 0 replies      
Don't require a signup just to find out how you approach teaching. Also, nice landing page, but you don't explain why you're qualified to teach.
coltr 12 minutes ago 0 replies      
Cool. You may have some tough competition with Treehouse.
Show HN: SecurityOverboard, a job board for information security positions securityoverboard.com
13 points by icanhasfay  2 hours ago   3 comments top 2
daenney 5 minutes ago 0 replies      
The name/URL caused me to assume it was a satirical site about throwing security overboard.

It looks pretty neat, though I'm wondering how you'll make sure the content stays up to date. Manually crawling or relying on the community to post them might let you down.

freehunter 48 minutes ago 1 reply      
This could be really helpful (I'm sitting at work right now as an information security employee, so I don't want to browse the site too much). I was recently looking for a new job in information security as an experienced candidate and many of the existing job search sites really let me down. I know this is one of the fastest growing segments of the industry, there's next to no unemployment, and salaries have potential to be incredible. But when I was looking, there was nothing. Nothing on Monster, nothing on CareerBuilder, and hardly even anything on Dice. I found more positions by going to a company's website and looking through their individual openings.

I don't know if it's that companies don't want to post their security positions or if the existing sites are poorly equipped to meet that specific need, but it was just terrible. Did I end up at the place I wanted for the salary I wanted? Sure. Did I end up at the best place I was capable of working at the highest salary I am capable of earning? I doubt it, because, according to Monster, CareerBuilder, and Dice.com, there are no information security positions within 50 miles of Pittsburgh and there haven't been for at least the past six months.

Automate the iOS build/test/deploy cycle with Distiller and Hockey distiller.io
12 points by jimdotrose  1 hour ago   4 comments top 3
BenSS 2 minutes ago 0 replies      
I want this, but: The product is not explained. It actually took a bit of clicking to finally find the setup process under documentation.

Minor nit: your blog header doesn't link back to the main site.

jc4p 43 minutes ago 1 reply      
I wish this was a link to the actual documentation linked in the blog post rather than this incredibly short post which is nothing but a sales pitch. The idea looks nice, but I couldn't help but laugh at the "Use Safari" warning in the documentation log. Why would I need to use Safari to download a simple file from the Developer Center?
neiled 1 hour ago 0 replies      
I was just looking for something like this. I missed my push->circleci->test->deploy workflow when I moved to iOS development. Will be having a serious look at this later.
TrustEgg Partners With the NBCI to Open One Million New Savings Accounts yahoo.com
4 points by Bricejm  19 minutes ago   discuss
HBO programming available on Amazon Prime amazon.com
4 points by brokentone  23 minutes ago   discuss
Open-Source Static Site Generators staticgen.com
48 points by hihat  6 hours ago   33 comments top 14
rogerbinns 3 hours ago 1 reply      
I use Nikola. On the tech side the developer decided to model it as a series of tasks such as render photos, render posts, build rss etc. Those tasks are also implemented to run incrementally. For example if you add a new post then only that new post gets rendered to the output directory.

While this sounds fine in principle, it soon gets messy. A configuration change may require everything to be rebuilt. But if you rename or delete a source file, it didn't track how that ended up in the output directory, so you end up with a slow accumulation of crud. It turns out to be fairly easy to confuse all the incremental logic and end up with messy builds. Add poor decisions like using the datestamp of items in the output directory to build the sitemap (they don't set the output directory datestamp to that of the input items) and you get important failings.

In my opinion, incremental builds should give the same results as full clean builds or you have a fragile non-repeatable build - something that is very undesirable. I wrote a bit more at http://www.rogerbinns.com/blog/on-nikola.html
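One common fix for the stale-output crud described above is a build manifest recording which outputs each source produced, so a rename or delete gets cleaned up on the next run. A rough sketch (not Nikola's actual code; file names and function names are mine):

```python
import json
import os

MANIFEST = "output/.manifest.json"  # hypothetical location for the manifest

def load_manifest():
    """Load the source -> [outputs] map left by the previous build, if any."""
    try:
        with open(MANIFEST) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def prune_stale(manifest, current_sources):
    """Delete outputs whose source file vanished since the last build."""
    for src in list(manifest):
        if src not in current_sources:
            for out in manifest.pop(src):
                if os.path.exists(out):
                    os.remove(out)
    return manifest
```

With that bookkeeping, an incremental build ends at the same output directory a clean build would, which is the repeatability property argued for here.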

cheshire137 10 minutes ago 0 replies      
I told my friend he should list his, Zodiac, on there. https://github.com/nuex/zodiac
pnathan 35 minutes ago 0 replies      
Heh, I was working on a SSG early last summer for my own stuff in Common Lisp. It's easy to make a bad and overcomplicated one. I know - I did it. :)
TillE 4 hours ago 0 replies      
It would be nice to have a more detailed feature comparison to get some idea of why I should choose one over the other.

I'm currently trying out Pelican just because I'm comfortable with Python and Jinja2 already, but I really have no idea if it's actually the best choice for the kind of site I'm trying to build.

syntern 2 hours ago 2 replies      
Is there a static site generator with support for multiple authors and multiple languages?

e.g. with archive pages like /<author>/<lang>/<yyyy>/<mm>/ and category pages like /<lang>/<cat>/<subcat>/

I fail to see any. I've tried to configure many of the available generators, but none of them seems to scratch the itch I have.

nness 5 hours ago 2 replies      
I've been involved in some incredibly large front-end development projects and Middleman was core to our FED workflow. Its a fantastic tool.
Todd 2 hours ago 2 replies      
I don't see any option for C# site generators. Maybe they don't make it into the Top N.

(Disclaimer: I'm biased, I wrote one).

currysausage 1 hour ago 0 replies      
Seriously, who came up with the idea that you should have to scroll down at least one full screen page in order to see just the tiniest bit of actual content?

On the home page - okay, if you need to do so. But why, just why, on content subpages? Do you really believe that your purple banner with Roboto is so incredibly beautiful that I want to just stare at it for a minute before scrolling down to the meat, every single time I click on a link?

"The computer industry is the only industry that's more fashion-driven than women's fashion." Damn, Ellison was so right about that.

Don't get me wrong, I am actually a typographer, and I am the last person on earth who wouldn't value appropriate white space. But ... seriously, find the right balance!

yaddayadda 3 hours ago 1 reply      
Metalsmith (metalsmith.io) actually uses consolidate.js as a template manager. consolidate.js supports 24 separate template engines (https://github.com/visionmedia/consolidate.js). So Metalsmith's "templates" entry that only lists "Handlebars" is extremely misleading.

I suspect there are other generators on staticgen.com that also use consolidate.js and suffer the same misrepresentation. (I just don't know the other generators well enough to know off the top of my head which ones use consolidate.js.)

Of course, an error such as this makes me doubt all the other information presented on staticgen.com

dstroot 3 hours ago 0 replies      
Octopress is a "built out" blogging example built on top of Jekyll. Showing Jekyll (a static site generator) and Octopress (a good example of how to build stuff on top of Jekyll) is a little misleading/confusing.

Splitting hairs maybe...

robobro 2 hours ago 0 replies      
I recommend Blosxom whole-heartedly.
Arkadir 2 hours ago 0 replies      
"... on GitHub."
tekahs 4 hours ago 3 replies      
What's the difference from http://staticsitegenerators.net? Why another one?
cauterize 6 hours ago 0 replies      
I love this. However, it would be interesting to see if you could add other metrics: age of repository, average mailing list posts per week, etc.
New two factor authentication gem for rails (using Devise) tinfoilsecurity.com
3 points by borski  10 minutes ago   discuss
Level3 is without peer, now what to do? cringely.com
703 points by mortimerwax  1 day ago   361 comments top 34
ChuckMcM 1 day ago 14 replies      
There is an interesting imbalance here: because Comcast has so much leverage from owning the last mile, they can push around Tier 1 providers. I'd like to fix that, mostly by creating a public policy around municipally owned Layer 1 infrastructure between customers in their cities and a city exchange building. Conceptually it would be no different than the city owning the sewers and outsourcing the water treatment plant to a contractor (or two). Creating a new "ISP" would involve installing equipment in the City Exchange(s), providing compatible customer premises equipment to subscribers, and then patching their 'port' at the City Exchange to the ISP's gear.

It's going to be a long conversation :-)

mokus 19 hours ago 0 replies      
> Nobody paid anybody for the service because it was assumed to be symmetrical: as many bits were going in one direction as in the other so any transaction fees would be a wash.

The justification for peering is not equal traffic, it's equal value - my customer wants to communicate with your customer. Regardless of the direction of traffic, the traffic is equally valuable to both of us because the traffic is the primary thing our customers are paying us for.

Unless, of course, I can get you to pay me for it anyway because of some unrelated advantage such as the fact that your customers can leave you more easily than mine can leave me. Comcast and others are attempting to leverage exactly that - in many regions they have no viable competition whereas Netflix and L3 are much more replaceable in their respective markets. This is a prime example of abuse of a monopoly.

mhandley 1 day ago 9 replies      
Would be very interesting to see what happened if the big CDN providers just depeered Comcast for 24 hours. Would certainly cost Comcast a fair amount in customer support calls, bad publicity, and properly bring the debate to the general public.
JoshTriplett 1 day ago 7 replies      
Somebody has to pay the money to upgrade the equipment and bandwidth available at these exchange points. The very reasonable argument in this article is that the ISPs should pay that cost, which seems reasonable given that their customers are demanding it. It sounds like the ISPs are playing a game of chicken, trying to see if their peers like Level3 will throw money in to pay for the ISP to upgrade its equipment and bandwidth. That's certainly something the ISPs can try to do; on the other hand, what are their customers going to do, not use Netflix and YouTube? If a pile of customers of one ISP start reporting that they're all having a poor experience with high-bandwidth video, and there are a pile of well-publicized press releases blaming the ISP, customers will start complaining to the ISP, and they'll have to upgrade their infrastructure eventually. (And in areas where they have competition, there's an incentive to upgrade before the competitors, to avoid losing customers; while there isn't such competition in every locale, there are enough locales with more than one ISP choice to make those customers painful to lose.)

But what does any of that have to do with mandated peering requirements at the NSFnet exchanges? Who would enforce that, and why, when any two major networks can set up peering at any number of meet-me rooms? Requiring that an ISP peer as much traffic as is available or not peer any at all seems ridiculous; some ISPs will suck more than others, but that's the problem of them and their customers, not a problem for the entire Internet.

Meanwhile, I'm surprised there aren't more startups and VCs looking to bet that "new ISP that doesn't suck" is a viable business model. People are chomping at the bit for Google Fiber, which seems unlikely to grow to a national level without developing competitors. This is a space with very few competitors, and there hasn't been serious competition in that space since DSL stopped being a viable option.

pessimizer 1 day ago 2 replies      
This cable-menu style image from the comments is scary: http://i.huffpost.com/gen/1567010/original.jpg
jrochkind1 1 day ago 1 reply      
You would think, okay, if Comcast is terrible at maintaining sufficient peering for its customers' needs -- and if the OP's proposal to throw Comcast out of peering exchange points happened, that would certainly lead to increased terribleness for its customers -- then eventually its customers would choose a different ISP. The market would solve it.

The problem is that in many, many markets, Comcast (or another ISP) is pretty much the only choice. Customers don't have another option, no matter how much Comcast underfunds its peering infrastructure or gets thrown out of peering exchange points.

So what is the consequence to Comcast for underfunding? What is the consequence to Comcast of even such a disastrous outcome as getting kicked out of the peering exchange point? Not a lot.

I'm not sure what the solution is, but 'regulate them as a common carrier' is certainly part of it, since they are a monopoly, and the common carrier regulatory regime was invented for exactly such a monopoly.

cobookman 23 hours ago 4 replies      
I've previously interned at one of the mentioned Cable Companies, and I see both sides.

The solution is to make it 'capitalistic'. Change all of our internet contracts from unlimited (up to 'x' GB/month) to a simple $/GB cost.

It would be in the ISPs' best interest to provide their customers with the fastest internet connection possible. E.g., if a customer can stream a 4k video vs SD, then the ISP makes more money per unit time.

Think of it this way, if comcast charges $0.25/GB, and a netflix SD show is say 1GB and HD is 4GB, then comcast grosses $1 for HD and $.25 for SD for the same customer streaming request.

Over time its likely the price per GB would decrease, just like it has for cellular.

On a more evil side, this would also stop cord cutters. Pirating content is no longer 'free', and Netflix would cost significantly more than $10/month ($10/month + 'x' GB * $/GB).

As for what rates to expect: if Comcast charges $30-55 for 300GB in ATL, that'd be about $.10/GB to $.20/GB. As for speed tiers in a $/GB system, your guess is as good as mine.
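A quick back-of-the-envelope check of the figures above (all prices are the comment's hypotheticals, not actual Comcast rates):

```python
def per_gb(monthly_price, cap_gb):
    """Implied $/GB if the whole monthly cap is used."""
    return monthly_price / cap_gb

# $30-55 per month for a 300 GB cap:
print(round(per_gb(30, 300), 2), round(per_gb(55, 300), 2))  # 0.1 0.18

# Revenue per stream at a hypothetical $0.25/GB:
rate = 0.25
print(rate * 1, rate * 4)  # SD (1 GB) vs HD (4 GB): 0.25 1.0
```

So the "$.10/GB to $.20/GB" range checks out (the top end is closer to $.18), and an HD stream really would gross four times the SD one under metered pricing.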

signet 1 day ago 2 replies      
If a customer is paying for an internet connection, they are paying for access to the full internet, to the best of their ISP's abilities. This is the net neutrality law we need: ISPs should be compelled to upgrade their backbone links as they become congested, to satisfy their customers' demand. Congestion can be easily monitored, and often these peerings are "free". (Yes, there is a non-zero cost to increase switch and router capacity and to have someone plug the cables in, but it's not like Level3 is charging for the bits exchanged.) But the point is, since most ISPs are de-facto monopolies in this country, we need rules telling them they have to upgrade their capacity to meet their customers' demand if they are promising broadband speeds.
fragsworth 1 day ago 0 replies      
The proposed solution is at the bottom of the article (which is why everyone seems to have ignored it):

> The solution to this problem is simple: peering at the original NSFnet exchange points should be forever free and if one participant starts to consistently clip data and doesnt do anything about it, they should be thrown out of the exchange point.

I do have a couple questions though - who is in charge of the original NSFnet exchange points, and do they have this authority?

brokenparser 1 day ago 2 replies      
This wouldn't happen if those ISPs didn't have local monopolies. Networks should be opened by selling traffic wholesale to other companies so that they can compete for subscribers on those networks. The network owners would have more than enough money for upgrades, and if they don't, downstream ISPs will sue them.
jamesbrownuhh 1 day ago 0 replies      
The UK experience is, broadly speaking, that any ISP who is sufficiently tall to have the appropriate interconnects can offer a service to a customer via the incumbent's last-mile infrastructure. (This is for telephone-based ADSL broadband - the UK's only cable operator is not bound by this.) But, furthermore, competitor ISPs are enabled to install their own equipment and backhaul directly into local exchanges, known as LLU - local loop unbundling. LLU allows competitor companies to provide just your broadband, or your voice telephone service, or both.

There is one further step, whereby the prices of the incumbent monopoly are regulated in areas where no competition exists. Ironically this works in the opposite way to how you'd think, as it forces the incumbent NOT to offer their lowest prices in that market - the intention being to make monopoly areas prime targets for competition and to ensure that potential competitors aren't scared out of the area by predatory pricing.

It's an odd system with good and bad on both sides, but it seems a lot better than being stuck with a single source of Internet access.

timr 1 day ago 3 replies      
People keep claiming that there should be a "free market" for bandwidth...but then they say that the ISPs should have to absorb the costs of peering (which can be significant -- the hardware isn't free) without passing the costs on to anyone. The backbone providers complain when the cost is passed to them; the consumers scream bloody murder when the costs are passed to them in the form of a bill.

Obviously, there's no free market in the status quo: we (consumers) basically expect to pay a low, ever-declining price for bandwidth, while someone else eats the costs of a growing network infrastructure. There's an economic disconnect, and legislating that it shouldn't exist seems worse than futile.

I say: pass the costs on to the consumer, and break down the monopolies on last-mile cable service. If the cable companies had to compete for subscribers, they could still pass on the costs of improving their infrastructure, but they'd have to compete with everyone else to do it.

In other words, the problem here isn't "net neutrality" -- it's that we've got a monopoly at the last mile that we need to destroy.

ry0ohki 1 day ago 1 reply      
"and make their profit on the Internet because it costs so little to provide once the basic cable plant is built."

That's some big hand-waving, because laying the cable costs a fortune and takes many years to recoup, which is why so few are competing for this "super profitable" business.

guelo 1 day ago 1 reply      
If the peering ports are congested that means that either the ISP needs to add more ports, or they are oversubscribing their capacity. Just make it illegal to sell more capacity than you have and the problem is solved.
rrggrr 1 day ago 0 replies      
Godaddy, Rackspace, Google, Amazon, etc. have skin in this game. With multiple redundant network connections they could, for a day or a week, defend neutrality by shaping their traffic to the lowest common denominator or routing their traffic to avoid the peer's punitive bottlenecks. Today it's Level 3 and Netflix, but tomorrow it could easily be them.
xhrpost 1 day ago 2 replies      
I like the article overall but I don't understand the author's proposed solution. The issue as it stands is apparently a lack of peering, in that big ISPs are using transit to reach large content providers rather than directly peering to those networks. So how would "kicking them out" for a maxed out connection work? If I buy transit from Level3 and my connection maxes out, I'm no longer allowed to be a customer of Level3?
guardiangod 23 hours ago 0 replies      
I don't know why everyone is up in arms over this. Here is some reverse thinking, and a perfect opportunity for everyone.

The current situation is that Comcast doesn't have the equipment/resources to handle the extra internet traffic at its peers. Most people want Comcast to buy more stuff to handle it; why don't we think the opposite way and get Comcast to decrease its amount of traffic?

If we can get Comcast to consume less traffic, they wouldn't have to complain to other peers about load asymmetry.

The best way to decrease traffic? Make Comcast have fewer customers.

Why does Comcast have so many customers, even though its resources cannot handle them? Because they have a government-mandated monopoly on the last mile, so they are forced to have more customers than they can handle.

We can come to the conclusion that last-mile monopoly -> network congestion -> forcing L3 to pay for peering.

If Comcast had to compete with other ISPs for the last mile, the traffic load would shift from one single entity (Comcast) to 10+ smaller ISPs. In that case the traffic load problem would not exist.

Another solution is to breakup Comcast.

See? This is a perfect opportunity. Comcast can have its multi-tier network, but at the price of its last-mile monopoly. After all, if they want the right to choose peers, we customers should also have the right to choose ISPs.

Eye_of_Mordor 4 hours ago 0 replies      
Lack of competition all around - just break up the big boys and everything will be fine...
jvdh 13 hours ago 0 replies      
Just for scale, backbone links these days are not 10 gigabits/sec; they're more on the order of 40-100 gigabits/sec.

The Amsterdam Internet Exchange is the largest and most important exchange in Europe, and its peak traffic each day reaches 3 terabits/sec.

sbierwagen 1 day ago 1 reply      
Regulate ISPs as utilities.
eb0la 12 hours ago 0 replies      
The problem is not to peer or not to peer. The problem is WHERE to peer.

I work for a European ISP, and the problem we have is the location of the peering. Big content providers will happily peer with you in, say, Palo Alto or Miami, but they will refuse to add a peering connection in Europe. Why? Because today the problem is about WHO pays for the intercontinental route (where bandwidth is limited and expensive).

Level3 is known in the industry as a pioneer of bit-mile peering agreements. This means you have to sample the origin and destination of the IP packets and do some calculations to work out how many miles each packet has traveled, then pay / get paid if someone dumps long-haul traffic on a peer. Getting to this is complicated with current technology, and many companies are refusing to peer with Level3 because they don't know what will happen to their business under bit-mile peering agreements.
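The accounting described amounts to weighting each sampled flow by bytes times the great-circle distance between its ingress and egress points. A toy sketch (coordinates and flows are made up for illustration; real bit-mile contracts are far more involved):

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    r = 3959  # Earth's mean radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bit_miles(flows):
    """Total bytes * miles over sampled flows given as (bytes, src, dst)."""
    return sum(b * haversine_miles(*src, *dst) for b, src, dst in flows)

# A 1 GB flow hauled Miami -> Amsterdam accrues far more bit-miles than one
# that stays local, which is exactly the asymmetry such agreements price.
miami, amsterdam = (25.76, -80.19), (52.37, 4.90)
print(bit_miles([(10**9, miami, amsterdam)]) > bit_miles([(10**9, miami, miami)]))  # True
```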

mncolinlee 1 day ago 0 replies      
I can't help but wonder if the RICO Act applies to this sort of extortion. My first thought was FCPA, but none of the ISPs involved can likely be construed as "foreign officials." The behavior can be described as demanding kickbacks, however.
tom_jones 13 hours ago 0 replies      
Along these lines, can someone ask whether net neutrality ever existed at all? Akamai and F5 have been helping big corporations like Disney circumvent internet bottlenecks for over a decade now. Those who have had the money have managed to purchase faster delivery schemes for over a decade.

Could it be, then, that telecommunications companies are consolidating so that they can extort money not from the small guys, but from the big guys? Are Hulu, Netflix and others willingly submitting to the extortion because they see no other way out?

To be sure, the telecommunications industry is in desperate need of regulation, because providing good service at a reasonable price for a reasonable profit is not good enough for them.
Rezo 18 hours ago 0 replies      
The extreme download vs upload traffic asymmetry between Comcast and L3/Netflix has been mentioned several times as a straw man argument for why Comcast is justified in charging Netflix directly.

Maybe Netflix could find some creative uses for all that idle viewer upload capacity to reduce the deficit ;)

- Have every Netflix client cache and serve chunks of the most popular streams P2P-style. You could have a DHT algorithm for discovering chunks or have Netflix's own servers orchestrate peer discovery in a clever way, for example by only connecting Comcast customers to peers physically outside of Comcast's own network. This would reduce Netflix's downstream traffic and increase viewer uploads.

- Introduce the Netflix-Feline-Image-KeepAlive-Protocol, whereby every Netflix client on detecting a Comcast network uploads a 5MB PNG of a cat to Netflix's servers over and over again while you're watching a video. Strictly for connection quality control purposes of course.
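Neither of these exists, of course, but the peer-selection idea in the first bullet is easy to sketch. A minimal illustration, assuming peers are tagged with their network's ASN (all hosts and function names below are invented for the example):

```python
import hashlib

def chunk_id(stream_id, index):
    """A stable key for one chunk of a stream, as a DHT would hash it."""
    return hashlib.sha1(f"{stream_id}:{index}".encode()).hexdigest()

def pick_peers(client_asn, peers, k=3):
    """Prefer peers whose network (ASN) differs from the client's ISP,
    per the idea of connecting Comcast customers only to peers outside
    Comcast's own network; fall back to any peer if none qualify."""
    outside = [p for p in peers if p["asn"] != client_asn]
    return (outside or peers)[:k]

peers = [
    {"host": "198.51.100.1", "asn": 7922},  # same network as the client
    {"host": "198.51.100.2", "asn": 3356},
    {"host": "198.51.100.3", "asn": 2914},
]
print([p["host"] for p in pick_peers(7922, peers)])
# → ['198.51.100.2', '198.51.100.3']
```

The effect is exactly the traffic-shifting described above: chunks a Comcast viewer uploads leave Comcast's network rather than staying inside it.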

neil_s 1 day ago 1 reply      
Since everyone is pitching their own solutions, how about I post mine. Let's take the example of Netflix and Comcast. Instead of making no deal with Comcast, and thus giving Comcast's Netflix users really slow or no service, Netflix should make the deal for now and tell subscribers that if you use Comcast, your Netflix rental costs more. By passing the higher costs on to those users, Comcast customers are given an incentive to switch ISPs.

Everyone shows loss aversion, and so will be determined to find out why being on Comcast gets them penalised. They will learn about its dick moves, and complain to Comcast to make them remove these fees so they can access Netflix, which they have already paid for access to.

rossjudson 22 hours ago 0 replies      
Level3 should drop the same percentage of outbound packets from Comcast that Comcast drops on the inbound. If every tier 1 did, Comcast's internet service wouldn't look all that good any more, would it?
Havoc 1 day ago 0 replies      
> It's about money and American business, because this is a peculiarly American problem.

Hardly. We've experienced the whole interconnect brinkmanship locally too (South Africa). It's actually quite the opposite: the interconnect disputes are a lot nastier in other countries because the links tend to be paid for (powerful co vs underdog), whilst the bigger US setups seem to run mostly open peering.

api 1 day ago 5 replies      
While I agree with the general thrust of the article, there is one fallacious argument here.

Cringely argues that cable breaks even and money is made on the net, but that's an artificial distinction. What if cable disappeared? Would they still make money if they had to pay for the upkeep of the network with only Internet fees? The desperation and risk of this game of chicken convinces me that the answer might be "not much." The loss of cable might very well be apocalyptic for these companies, at least from a shareholder value and quarterly growth point of view.

What's happening is very clear to me: the ISPs are trying to either harm the Internet to defend cable or collect tolls on streaming to attempt to replace cable revenue. That's because cable is dying a slow death. This is all about saving cable.

The fundamental problem is that cable ISPs have an economic conflict of interest. They are horse equipment vendors that got into the gas station business, but now the car is driving out the horse and their bread and butter is at stake.

keehun 1 day ago 2 replies      
Maybe I'm naïve beyond any recognition, but shouldn't the ISPs, or whoever is peering, charge based on bandwidth amounts? It sounds like they have a flat-rate contract with each other and now they're charging more?
swillis16 1 day ago 0 replies      
It will be interesting to see how gaming download services such as Xbox Live, PSN, Steam, etc. would be affected, as the file size of video games keeps getting larger due to advances in the video game industry.
snambi 1 day ago 1 reply      
Why are there no last-mile providers other than Comcast and AT&T?
droopybuns 1 day ago 4 replies      
So on the one side is the fat-cat ISP who doesn't want to make expensive capital investments in their transport.

And on the other side are the fat-cat, VC-funded video content providers, who don't want to pay for their MP4-based saturation of all the pipes.

This is a negotiation. There are two active media campaigns that are trying to gin up our anger against The Other Guy (tm) as part of their negotiations. I just can't get invested in this nonsense.

lifeisstillgood 1 day ago 2 replies      
1. Peering is based on equal traffic both ways. At the moment we tend to download gigabytes with a few bytes of request. As video communications really take off (yes, chicken and egg; see below), this asymmetry will get lost in the noise.

2. Rise of ad-hoc local networks. This might come out of mobiles, this might be me dreaming, and it might come with sensible home router designs, but ultimately most of the traffic I care about probably originates within 2 miles of my house: my kids' school, traffic news, friends, etc.

A local network based on video comms? That will never happen. Just like mobile phones.

3. Electricity and investment. In the end this is down to government investment. Let's not kid ourselves: once gas, water, electricity, and railroads each passed some threshold from nice-to-have into competitive-disadvantage-not-to-have, governments stepped in with either the cash or the big sticks.

Fibre to the home and massive investment in software engineering as a form of literacy are the keys to the infrastructure of the 21st century; they are a must-have for the big economies, and a force multiplier for the small.

spindritf 1 day ago 11 replies      
Except it's actually right (not wrong), because those bits are only coming because customers of the ISPs, you and me, the folks who have already paid for every one of those bits, are the ones who want them.

What is the source of the notion that, because you paid for your consumer broadband, all bits are paid for and the charge for carrying them cannot be split with the other side of the connection? Why is it so bizarre that both sides of the connection have to pay for it? Because you're used to your phone working differently?

As an analogy, you know how you used to pay for a subscription to a magazine and there were ads inside which advertisers (the other side of the connection via the magazine in this case) also paid for? The magazine split its fee in two: you paid part of it, and the advertisers paid the other part. It's the same here.

There is nothing fundamentally wrong with charging both sides. You may prefer a different fee structure but a better argument than "I already paid for it!" is necessary.

Show HN: HipstaDeploy Generate and deploy static websites on CloudFront github.com
8 points by proudlygeek  1 hour ago   4 comments top 3
proudlygeek 1 hour ago 1 reply      
Author here: Happy it can be useful to others :)

Right now I've only tested it with a local Ghost installation, but theoretically it should work with every blogging platform, since it uses wget to fetch pages. Since it's basically a very silly and dumb shell script, I didn't want to use something platform-specific such as grunt / gulp / rake / whatever.

Feel free to give me your feedback about it (Bugs, Features, etc.)
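The author's script isn't shown here, but the core of any wget-based mirror is the URL-to-file mapping the fetch produces. A rough sketch of that mapping (the function name is invented; 2368 is Ghost's default local port):

```python
from urllib.parse import urlparse

def url_to_local_path(url):
    """Map a fetched URL to the on-disk path a wget-style mirror
    would write, ready to be synced to S3/CloudFront as-is."""
    parts = urlparse(url)
    path = parts.path or "/"
    if path.endswith("/"):
        path += "index.html"  # directory URLs become index files
    return parts.netloc + path

print(url_to_local_path("http://localhost:2368/about/"))
# → localhost:2368/about/index.html
```

Because the mapping mirrors the site's own URL structure, the resulting tree can be uploaded to a static host with no link rewriting at all.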

steadicat 1 hour ago 0 replies      
If anyone is interested in a Node.js alternative, it's pretty straightforward to use Gulp (http://gulpjs.com/) for this. Take a look at the deploy task for my personal web site for an example: https://github.com/steadicat/attardi.org/blob/master/gulpfil...
oddevan 1 hour ago 0 replies      
Currently I'm using a local WordPress install and a static-site plugin, and I'm getting kinda fed up with the gotchas. (Details and links at http://www.oddevan.com/about ) If this does what you say it does, not only can I automate the deploy process, I can move over to Ghost (which I'm already using for its superior Markdown editor).

Proxy.app Lands In App Store websecurify.com
52 points by passfree  6 hours ago   50 comments top 13
tptacek 3 hours ago 2 replies      
To replace Burp, you need at a minimum a proxy that reliably works at high speed on a huge variety of sites, a UI for capturing, altering, and resending requests, and a "fuzzer" that will transform a template request according to a ruleset.

This turns out to be a pretty difficult thing to build well. Every pentester who can code and owns a Mac wants to rewrite Burp; I myself have died on that hill several times. But rewriting Burp is a bit like rewriting Microsoft Word: so much of the value is in the details, and there are so. many. details.
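To make the "fuzzer" requirement concrete, here is a minimal sketch of the template-expansion piece. The §marker§ placeholder syntax is borrowed from Burp Intruder; the function and payload names are invented for illustration:

```python
import itertools
import re

def expand(template, payload_sets):
    """Expand a request template containing §name§ placeholders into one
    concrete request per combination of payloads (cluster-bomb style).
    payload_sets maps each marker name to a list of payload strings."""
    markers = re.findall(r"§(\w+)§", template)
    for combo in itertools.product(*(payload_sets[m] for m in markers)):
        req = template
        for name, value in zip(markers, combo):
            req = req.replace(f"§{name}§", value, 1)
        yield req

template = "GET /item?id=§id§ HTTP/1.1\r\nHost: example.com\r\n\r\n"
reqs = list(expand(template, {"id": ["1", "1'--", "<script>"]}))
print(len(reqs))  # → 3
```

A real tool layers far more on top (throttling, response diffing, grep-match rules), which is exactly the "so many details" problem.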

monkey_slap 3 hours ago 3 replies      
Would you be able to give any examples, aside from being "native" (which I'm assuming means it's built on AppKit+Cocoa), of what your app does differently or better than Charles? I use Charles pretty frequently, but my co-workers haven't gotten on board yet.

As an aside, IMO claiming that Charles isn't native is a little disingenuous. Yes, it's built with Java, but I wouldn't dismiss it as non-native (unless you're using "native" in the purest of ways, meaning that it's "natural" to the system, e.g. Cocoa). I consider tools like Eclipse and IntelliJ "native" even though their UI may be poor compared to Xcode.

sdevlin 4 hours ago 1 reply      
Burp Suite has a pretty rich feature set. How does this compare? For example, is something scriptable like Burp Intruder included (or planned)?
hartator 2 hours ago 1 reply      
That's great news. We have been using Fiddler inside a Windows virtual machine just to do that.

I know everyone is asking the same kind of question, but how does it compare to Fiddler?

Our use case is kind of simple. We are building APIs on top of websites that don't have APIs, so a lot of "spying" is required. We had to use Fiddler because it was the only one to correctly handle Flash forms and file-upload forms, and being able to globally search for a particular token across every request has been so much of a lifesaver.

yincrash 4 hours ago 2 replies      
I use Charles very often, but I use it to see requests coming from a mobile device. Can I proxy SSL connections from mobile with this? Will it let me use a custom self signed cert to do the SSL proxy?
joshcrowder 4 hours ago 1 reply      
It would be good to get a demo of this. I'm unsure of the benefits over Charles at the moment; is it just that it's native? I'm happy to pay for software that looks better, but I'd like to try it out first.
hopeless 4 hours ago 1 reply      
I was only introduced to Burp at Scot Ruby conf last week as a tool for web security testing. If Proxy.app is targeting the same domain, it seems to provide none of the tools that Burp does.

Burp was a pain to set up (OS X makes installing and using Java from the command-line a ridiculously complicated process), but it looks 1000x more useful than Proxy.app is.

On a marketing note, you might want to think about who your market is and what job they use a proxy for. It doesn't seem to offer anything for a security researcher but maybe there's enough there for a web developer.

hartator 2 hours ago 1 reply      
Ha! Another question, do you plan to be able to filter by applications?

When you have a lot of tabs open, there is always pollution that gets added to what actually matters.

ahmadeus 4 hours ago 1 reply      
I just downloaded it, and it is not intercepting anything. What setup do I need for it? Charles works out of the box!!
fideloper 4 hours ago 1 reply      
Is this mostly for monitoring, or does it have other uses? I'm unclear based on the app store and linked pages.
iNeal 4 hours ago 1 reply      
Any significant advantages of using this over mitmproxy?
BradRuderman 2 hours ago 0 replies      
Has anyone done a comparison to find the best mac equivalent for Fiddler?
mihaela 2 hours ago 0 replies      
For me, nothing beats Wireshark, but this app looks neat.
eBay customers personal data was compromised in March ebayinc.com
149 points by patchoulol  5 hours ago   122 comments top 26
panarky 4 hours ago 5 replies      
The spin is atrocious. The big story is not the headline, that users must change passwords.

The big story is that ebay leaked personally identifiable information. Naturally this is buried four paragraphs down.

  The database, which was compromised between late February and  early March, included eBay customers' name, encrypted password,  email address, physical address, phone number and date of birth.
Don't patronize me with empty platitudes like "changing passwords is a best practice".

Tell me to brace for an inevitable wave of phishing and identity attacks.

Tell me that bad guys will try to steal my other online accounts with this information.

Tell me to trust no one because bad guys now look legit with my home address, phone number and DOB.

Pro tip: put the real story in the headline. That's also a "best practice".

UVB-76 2 minutes ago 0 replies      
Remember a couple of months ago when Icahn described eBay as the worst-run company he'd ever seen? [1]

Seems rather prescient now. Their incompetence has just cost us all our personal information.

[1] http://www.cnbc.com/id/101467290

leorocky 4 hours ago 5 replies      
> The company also said it has no evidence of unauthorized access or compromises to personal or financial information for PayPal users. PayPal data is stored separately on a secure network, and all PayPal financial information is encrypted.

Ebay being hacked kind of scares the hell out of me because PayPal has my checking account information with direct access to withdraw funds. A hacker could rob me blind. Like seriously the owner of PayPal should not be telling me this "we have no evidence of" bullshit because there's no alternative to PayPal that online stores actually use and changing your checking account number and routing number is very very painful. You have to get new checks, you lose checking history. Fuck.

jgrahamc 5 hours ago 0 replies      
Has anyone received an email from eBay about this? I'm guessing that the phishers are going to be faster at getting out fake change password emails than eBay themselves.
danielweber 5 hours ago 3 replies      
FWIW, "ebayinc.com" totally screams "phishing attempt" to me.
orbitingpluto 4 hours ago 5 replies      
Since PayPal == eBay, I just went to change my PayPal password as well.

PayPal went full retard. The security confirmation question?

Please supply your full credit card number ending in ####.

Um, that's the information I'm trying to protect in the first place.

edit: sorry about the "full retard" - trying to quote from Tropic Thunder/RDJ. did not mean to offend

wrboyce 4 hours ago 2 replies      
"The database, which was compromised between late February and early March, included eBay customers' name, encrypted password, email address, physical address, phone number and date of birth. However, the database did not contain financial information or other confidential personal information."

So, just my entire identity then? eBay really seem to be down-playing the severity of this.

AdmiralAsshat 3 hours ago 0 replies      
Week 1: "We have no reason to believe that any confidential information has been compromised."

Week 2: "We have observed some limited and negligible instances of credit card information being compromised that coincidentally happened to be linked to eBay accounts. We consider this purely coincidental and feel it is no cause for concern."

Week 3: "Oh god they took everything."

dang 2 hours ago 0 replies      
We changed the title because, as users pointed out, it was misleading.
freehunter 5 hours ago 2 replies      
>Cyberattackers compromised a small number of employee log-in credentials

This bothers me. No one cares how many employee logins were stolen. It only takes one to cause a huge amount of damage. Is anyone reading this thinking "oh, it's okay, they didn't take too many employee logins"?

davb 4 hours ago 3 replies      
And neither eBay nor PayPal allows me to paste a secure password from KeePassX. Sigh.

Edit: I can now paste on eBay (not sure what went wrong the first time) but PayPal is still actively preventing pasting a new password.

brador 4 hours ago 1 reply      
Is this only for ebay US or are other country versions affected too?
ExpendableGuy 1 hour ago 0 replies      
So I logged into eBay for the first time in over a year to change my password, and noticed that eBay edited my reply to a buyer's feedback.

Has anyone else heard about eBay doing this? I have no way to edit it back to the way it was from what I can tell. It's infuriating -- they changed the word "Buyer" to "Seller" to make it sound like my reply to feedback was referring to myself.

oneweirdtrick 3 hours ago 1 reply      
Shouldn't eBay have emailed all their customers by now? Why are we learning about this through a blog post?
ChikkaChiChi 2 hours ago 0 replies      
I'm getting tired of sites that limit password length. Microsoft limits you to 16 characters.

Storage is cheap and you shouldn't be skimping on the most sensitive field in your dataset.
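One supporting detail: a properly hashed password occupies constant space regardless of input length, so a 16- or 20-character cap can't be about storage at all. A quick illustration (a bare SHA-256 is shown only to demonstrate fixed-size output; it is not, by itself, how passwords should be stored):

```python
import hashlib

# A cryptographic hash has fixed-size output no matter the input:
# a 7-character password and a 16,000-character passphrase cost the
# database exactly the same number of bytes to store.
for pw in ["hunter2", "x" * 16000]:
    print(len(hashlib.sha256(pw.encode()).hexdigest()))  # → 64 both times
```

So a length limit suggests the password is being handled in plaintext somewhere, which is the real worry.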

kmfrk 4 hours ago 1 reply      
Any way to delete your account?
ericcholis 4 hours ago 1 reply      
Given that important auxiliary details were compromised (name, phone, etc.), I'm beginning to think that encrypting that information should be more standard. Obviously this leads to trouble if searching by that information is required.
pling 4 hours ago 1 reply      
Considering the situation, it's either poor timing or related, but I can't change my PayPal password; I get a blank page.

Not confident.

To be honest it takes the piss as they are spamming UK TV with adverts for how secure PayPal is at the moment.

Really wish I never signed up but eBay has a monopoly on the payment types now.

Sami_Lehtinen 2 hours ago 0 replies      
But don't use DuckDuckGo's password generator. http://www.sami-lehtinen.net/blog/random-passwords-using-duc...
askew 4 hours ago 2 replies      
Unfortunately, attempting to reset one's password results in:

> Sorry. We're currently experiencing technical difficulties and are unable to complete the process at this time.

Swamped already?

hpoydar 4 hours ago 0 replies      
Took a trip back to 2002 and visited the Account Settings / Personal Information screen to change my password. No alerts or redirects on login to change credentials. (But evidently an exciting "deal frenzy" is important enough to highlight in all caps and red text in the nav bar). Ok, so the PayPal DB wasn't affected, but does that matter? PayPal account is fully linked up there.
rahimnathwani 5 hours ago 2 replies      
database containing encrypted passwords

Does anyone know whether they used per-user salt?
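eBay hasn't said, but the reason the question matters is easy to show: without per-user salt, identical passwords hash identically, so one cracked hash breaks every account sharing that password. A sketch of salted storage and verification (the KDF parameters are illustrative, not eBay's):

```python
import hashlib
import hmac
import os

def store(password):
    """Hash a password with a fresh per-user salt; both are stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

s1, d1 = store("correct horse")
s2, d2 = store("correct horse")
print(d1 != d2)  # → True: same password, different stored hashes
```

With per-user salt, a precomputed rainbow table is useless and each hash has to be attacked individually.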

dodyg 3 hours ago 0 replies      
I would be so fuckin' mad if the passwords aren't hashed.
Theodores 4 hours ago 1 reply      
This is headline top-story news on the BBC right now, therefore it must be 'big'. Yet there is no evidence of anyone making unauthorised access.

We have had a resurgence of 'Snowden' stories in the last few days, so here is a hypothetical scenario: what does a company do if the hackers turn out to be NSA/GCHQ? It is unlikely that they would drop an email to explain that they had just stolen the whole customer database because of some 'al-qaeda' based reasoning, so you would not know it was them. If you suspected it was them then people would wonder if you had taken your meds. If you got the FBI involved then they would tell you it was some script kiddies rather than the Peeping-Tom-Brigade.

Or, if you did know it was the NSA, then you might think that information was safe in their hands and not feel the need to tell the customers.

I look forward to when we get stories where the NSA are explicitly blamed for a data breach instead of some random Chinese hacker, and that emails are sent out saying 'we have been hacked by the NSA again, can you change your passwords please?'. If the NSA crawled out of the darkness to deny the breach then nobody would believe them.

morbius 2 hours ago 0 replies      
I'm so tired of large corporations not taking infosec seriously. This is a shame, in all honesty.
darylfritz 3 hours ago 2 replies      
eBay's password character limit is 20 characters. I use a password manager and detest sites that limit your password length to < 100 characters.
Show HN: My new running app, Vima apple.com
9 points by clarky07  2 hours ago   16 comments top 3
andrewljohnson 2 hours ago 1 reply      
I don't consider this an "MVP" for a running app. The market is too far along, and I think this lacks a certain kernel of innovation that would set it apart in spite of lacking features.

The app has a nice, kind of Yahoo Weather-like feel. For a commercial success, though, I think it does not even embody 1% of the necessary work:

* Without some sort of web+mobile experience, you are way behind everyone else that is established in the space (runkeeper, runtastic, strava, mapmyrun).

* The competitors are well-funded and start-uppy - so, even as you perhaps start to add necessary features like route-making, elevation-correction, etc - they will still be innovating and making new stuff.

* You integrated with Pebble, but the other hardware in the space probably makes more sense (like heart rate watches), so that's more work. Or nutritional stuff like MyFitnessPal.

For the marketing, I'd say:

* The app ends up being a demo because of the restriction on number of runs, while the competitors offer something fully functional, including Vima's paid features, for free. Is this going to work?

* The name is probably bad - take it from the guy who likes greek mythology for names. If it's a running app, you might as well say so. Runtastic, Runkeeper, MapMyRun - there's a theme here.

* I don't see any way I can join the community, so Vima can email me later and follow up to see why I'm not using the app after I downloaded it.

I've thought about making a fitness app, and I have some infrastructure such that I can get to market with a more feature-complete product than Vima, in just a few weeks. It's a big brawl to walk into though.

irongeek 35 minutes ago 0 replies      
Personally I want access to my running data in json or rss. Without that I would not consider using a new app.
clarky07 2 hours ago 2 replies      
OP Here. Feel free to ask any questions. I'd love to get some feedback on it from anyone that runs or bikes etc.

If you're curious, Vima is Greek for "Pace"

When Science Becomes News, The Facts Can Go Up In Smoke npr.org
49 points by jboynyc  2 hours ago   23 comments top 13
garenp 57 minutes ago 2 replies      
Humor aside, it really concerns me that so much of the information in mainstream science news articles can mislead readers.

More worrying is that it's often not possible to persuade someone who is swayed in the wrong direction, because they just don't have the base level of knowledge to allow it.

On this point I almost want to say that every person who graduates from High School ought to have gone through a rigorous class in logic and another in statistics. It's all well and good to say everyone should have "critical thinking" skills, but you can't get there without some pretty solid intellectual tools.

hyperion2010 1 hour ago 2 replies      
In cases like this I think it would be helpful if the reporter (probably with the help of the scientist) laid out all the possible explanations for this finding. In this case it would include 'people with a smaller nucleus accumbens are more likely to smoke pot.' Showing people all the possible interpretations of a scientific result is hard, even for scientists. We often miss interpretations that in retrospect seem obvious given the data. Science news needs to report about the science first and about any human elements second.
devindotcom 1 hour ago 0 replies      
As a reporter I've found that striking a balance between accuracy and reasonable summary can be extremely difficult, and sometimes a headline that has to be less than 45 characters or so ends up being more suggestive than it should. But I hate this kind of story as much as others do, and I'm proud to say I've rejected quite a few stories the editors wanted because of salacious write-ups, and I'm careful to use the language of the study itself or check with the researchers, almost all of whom are happy to discuss their work. We also have a phenomenally well-informed science/space editor where I work, who has been in the press since they were printing the newspaper in the basement of the building. So that helps.

It's worse to have to be the guy who points stuff like this out on Facebook, where you end up sounding like the science equivalent of a grammar nazi - but I've grown to be fine with it, since there's much less room for interpretation in the results of a limited and specific piece of research.

Florin_Andrei 1 hour ago 1 reply      
You're a reporter. Your career is helped if your articles sell a lot. Of course you have an incentive to put a bit of "kick" in the title, or even in the content of the article, if that will help your bottom line.

Money-driven outcomes are not always optimal.

jasontsui 23 minutes ago 0 replies      
To some of the older folks on HN - is this a new problem?

The last few years have brought on a whole different type of news-media hybrid (the BuzzFeeds, HuffPos and Gawkers): organizations that are driven primarily by clicks and do not hold themselves to the standards of traditional print news. While there were dubious options on paper before (Daily Posts, National Enquirers), the internet is a far greater venue for propagating bullshit with clickbait headlines. Some of the newer sites I'm seeing people post on Facebook have skipped the truth part altogether; they go straight to fabricating stories. TV has gone in the same direction with news-entertainment.

I'm pretty concerned. When it's too hard to find signal in all the noise, I'm afraid folks will give up altogether. With BuzzFeed putting out longform articles and the NYT putting up quizzes, it's already hard to discern who cares about delivering real news and who will do anything for clicks.

But maybe I'm just young (25), and people have always found echo chambers, and yellow journalism is always something we've had to wade through to find the facts. What do you guys think? Has anything actually changed?

jmzbond 1 hour ago 0 replies      
I don't think this problem can be addressed without talking about the fundamental difference in incentives between the groups.

Reporters are incentivized to get the story to sell. Especially as there are more and more freelancers, competition is becoming intense. And let's face it: by nature, we as consumers of information are drawn to the outlandish and sensational. There was an HN post a few weeks ago about someone who put out fake crazy headlines and got crazy CTR on Twitter.

Scientists are incentivized to be objective in finding the truth. Scientists avoid making claims about causation until the last possible moment just so they can be sure all the variables have been controlled for and the results are not outliers.

I don't have any well-thought through answers... but thoughts?

buckbova 1 hour ago 2 replies      
I think the point should be, this study should not have been published at all. The sample size was way too small with results open to misinterpretation.

After a scan, I think the Boston Globe article is well written.


The title is "Study finds brain changes in young marijuana users". Maybe it should read "differences" instead of "changes".

SixSigma 1 hour ago 0 replies      
> Lots of people smoke pot. They do so, presumably, because it affects their brains, and not despite that fact. It would be astonishing and inexplicable to find that getting high didn't bring about changes in the brain. But are those changes lasting? Are they permanent? We don't know and we'd like to know.

The study didn't even find that the brains had changed at all, just that they were different.

As the sample was so small, they could just as well have concluded that brown hair made a difference, or that people who prefer broccoli to cheese are more likely to smoke pot.
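The small-sample point is easy to demonstrate with a quick simulation: two groups drawn from the very same distribution routinely show sizable mean "differences" at n=20. The group sizes and threshold below are arbitrary choices for illustration:

```python
import random

def spurious_rate(n_per_group=20, trials=2000, threshold=0.5, seed=42):
    """Fraction of trials in which two groups drawn from the SAME
    standard-normal distribution differ in mean by more than `threshold`."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = [rng.gauss(0, 1) for _ in range(n_per_group)]
        b = [rng.gauss(0, 1) for _ in range(n_per_group)]
        if abs(sum(a) / len(a) - sum(b) / len(b)) > threshold:
            hits += 1
    return hits / trials

# Pure noise, yet apparent "differences" show up far more often
# with 20 subjects per group than with 500.
print(spurious_rate(20), spurious_rate(500))
```

Nothing here is causal or even real; the spread of group means alone manufactures the effect, which is exactly why tiny-n brain studies deserve hedged headlines.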

Vilkku 1 hour ago 0 replies      
StarTalk Radio (hosted by Neil deGrasse Tyson) has covered this topic together with Miles O'Brien in the two most recent episodes. I'm a fan of the show, but I can imagine some might find it too humorous for their tastes.


EDIT: By "this topic" I meant science reporting, not marijuana.

tom_jones 1 hour ago 0 replies      
I've seen many a publication where the scientists urge restraint in interpreting the results, but the newspapers completely ignore their pleas and instead proclaim "Study of 10 subjects proves X could make you immortal!" It's irresponsible journalism.
daveslash 1 hour ago 1 reply      
I use this to try to show people what I mean by "correlation is not causation"


at-fates-hands 1 hour ago 0 replies      
It's quite ironic for NPR to say some news outlets twist facts and headlines to get a point across, considering the mainstream media does this on a daily basis and nobody bats an eye about it.

If you want to talk about misreporting something, you should start there, not a few articles on people casually using weed.

mnw21cam 2 hours ago 0 replies      
Relevant XKCD comic, as required in these situations:


New DuckDuckGo design duckduckgo.com
569 points by ashishk  1 day ago   204 comments top 74
Spittie 1 day ago 2 replies      
That's awesome! I've been using the beta for a while and it's been much better than the old site.

A bunch of ideas/complains:

- It's awesome that you're showing me a nice map when I search for places/addresses, but let's be honest, I'll probably need to load it into an online map (OSM, MapQuest, Google Maps) to get directions. So an "open in map" button would be great (yes, I can copy/paste the address and !bang it, but it's not exactly a great experience)

- Sometimes I just want to search for images or videos. Yes, I can search "Images X" or "Videos X", but it's not nice. Also you get the minimized image/video box. I'd add two bangs, !i and !v (those right now alias to Google Images and Youtube, which have !gi and !yt anyway) to search for images/video and that will auto-open the images box.

- Auto-suggestions are neat, but please add an option to remove the "select-on-hover" behavior. It's really annoying to casually move the mouse and select something else.

That's mostly it; otherwise I'm really, really happy with DDG. Thanks, and I wonder what the future holds!

yegg 1 day ago 4 replies      
Here's the announcement I just posted on our blog: https://duck.co/blog/whatsnew

Thank you to everyone who provided feedback to us during our public beta period! Please keep the feedback coming so we can quickly iterate. We really do listen to it all.

ianstormtaylor 1 day ago 4 replies      
This is a really amazing direction in terms of design. Like most people probably, I've pretty much ignored DDG because it didn't seem to be doing anything more than Google already did, but this design is really interesting for going in a new direction.

The only thing that stands out to me as less useful than the equivalent Google search at this point is the hierarchy of the results. Google uses a link-like blue color for the titles of each result, which seems like a leftover from a past age of the web, but is actually useful for scannability because the text of the headers stands out.

Compare the current DuckDuckGo... https://i.cloudup.com/vrwZgUkOty.png

...to Google... https://i.cloudup.com/eFCFEE5TYG.png

...to an adjusted version of DuckDuckGo... https://i.cloudup.com/jluIYZWtzz.png

Having an extra color for the headings lets you scan the page much more easily, which lets you get to the result you wanted faster. The downside is that since their brand color is red, it feels "best" to have the highlight color be red. But then that has some negative emotional connotations. I tried green as well, but it didn't stand on its own enough, since there's so little green on the page.

Anyways, I've switched to DDG as my default and will try it out for a while again. I also love those favicons that show up next to the domain names.
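For anyone who wants to try the heading-color adjustment mocked up above without waiting for a setting, a browser user stylesheet (Stylish, userContent.css, etc.) is enough. This is only a sketch under assumptions: the `.result__title` selector and the exact shades are guesses about DDG's markup and palette, so inspect the live page before relying on them:

```css
/* Hypothetical user-stylesheet tweak -- the selector and colors are guesses,
   not taken from DDG's actual stylesheet. Verify against the live page. */
.result__title a {
    color: #c9481c;   /* red-orange accent so titles stand out when scanning */
}
.result__title a:visited {
    color: #8a3213;   /* darker shade keeps visited links distinguishable */
}
```

Since this only overrides colors, the worst case if the class names are wrong is that nothing changes.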

Patrick_Devine 2 hours ago 0 replies      
The new design looks pretty slick. I really dig the bootstrappiness of it. I do, however, have a couple of nits. I couldn't figure out how to make the weather in centigrade, so I tried searching for this:


It came up with some interesting results. The images opened automatically for me (not sure why) and were a little off the mark. Ideally there would be a link to switch between Celsius and Fahrenheit, with maybe even a cookie to save your preference, although I don't know if that's very anti-DDG (does DDG store cookies for anything?). Yahoo "solves" this by having you go to weather.yahoo.ca to default to metric. At any rate, given that 95.5% of the world's population uses metric, it'd be a nice feature.

Walkman 1 day ago 5 replies      
Honestly, I don't care how clean or nice the page design is if it can't give me good results. Here is an example:

The other day, I was searching for a Django core developer's contact. I knew his exact name was Baptiste Mispelon so I searched that directly.

On Google [1], after his Twitter and Github accounts, the first picture is correct, and I did not have to do anything else; the contact info is there, his picture is there, great.

On DuckDuckGo [2] the picture is not even close, and the first couple of results are not as useful as on Google [1].

I think it is a mistake to concentrate on clean design in a search engine while the search algorithm is not that good. AFAIK Google's PageRank algorithm is well known; when I was in university I even heard stories that a student (in the same class as me) reproduced the algorithm on his own!

TL;DR: I want to find relevant information with a search engine, not to look at some nice webpage.

[1]: https://www.google.hu/search?q=Baptiste+Mispelon

[2]: https://duckduckgo.com/?q=Baptiste+Mispelon

k-mcgrady 1 day ago 2 replies      
Instead of putting a large box at the top of some search results with what you think I want, why not put it to the side (the way Google does) and make use of the large amount of wasted whitespace. I have tonnes of horizontal space available, not much vertical.
mike-cardwell 1 day ago 1 reply      
I use <alt>d to select the text in my address bar. If I am on a DuckDuckGo search results page, it seems this keyboard combination is intercepted and I'm bounced off to one of the results (well, the 'd' on its own does this too). I can also use <ctrl>l, but I've gotten used to using <alt>d.

[edit] I have bug reported this. They have a very good feedback system on their website.

joosters 1 day ago 4 replies      
White text over white images on https://duckduckgo.com/whatsnew - not very readable!

(Edit: How odd; a reload caused the page to be displayed differently, with the images below the text and icons.)

james33 1 day ago 1 reply      
I've never really bought into DDG, especially for its lack of features. It still can't match Google, but this is certainly a step in the right direction and gives me pause to think about using it at least once in a while now. Glad to see progress in search outside of Google for a change.
Arnor 22 hours ago 1 reply      
I've tried DuckDuckGo a couple times before. Today I decided to give it one day and see if I felt more comfortable with it. I was having a really hard time parsing the results so I did a search side by side in Google and DuckDuckGo. I looked at Google and thought "yeah, I know I want link #3" then I looked over to DuckDuckGo and saw that the same link was result #2 but I couldn't identify it as the page I wanted just by looking at the results page. Further analysis helped me to understand the process I use for parsing search results. It turns out that the most important part is the URL and I've trained myself to look for that in the format Google renders it (right after the link). When I realized that this was what I was actually looking for, it all became much easier.
shmerl 1 day ago 1 reply      
The fonts look messed up for me (Debian testing / Firefox 29.0.1). In some cases letter i has a shifted dot (see the word Wikipedia in the last search result in the image below):


The fonts come from here:

* https://duckduckgo.com/font/ProximaNova-Sbold-webfont.woff

* https://duckduckgo.com/font/ProximaNova-Reg-webfont.woff

gejjaxxita 1 day ago 1 reply      
Small but surprisingly annoying thing about DDG: I have to hit TAB too many times to start cycling through search results, on google one TAB takes me to the first search result, on DDG it's an unintuitive series of links.
edwintorok 1 day ago 1 reply      
What changed since the preview was announced? https://news.ycombinator.com/item?id=7700192

The contrast on the main page is still too low.
skizm 1 day ago 1 reply      
Unrelated UI nitpicking: I feel like I should be able to scroll on this page. Just seeing the top of the virtual screen is annoying.
Geekette 20 hours ago 0 replies      
Wow DDG, you guys are on fiyah! I just rebooted Firefox and saw the new new look; love it. What I noticed:

* Someone looking to search immediately may be confused/frustrated as the text entry field is currently not visible until the slideshow ends.

* Consider relocating the "press" button away from bottom right; I almost missed it and only saw it because I'd been on the page for a few minutes, finished the slideshow and was looking for more.

* Also, when I saw that button, I thought it meant "press this to see something cool", so I was disappointed when it only took me to the company press page.

* I really like the background colour scheme on the front page, but you might consider switching it off as it doesn't carry over to other pages. I.e., I found the visual discontinuity a bit jarring when the search and press pages didn't reflect it; that's when I realized that the biggest message I got unconsciously was that my default DDG pages would now be in this colour (with the ability to change it). I see now that the pages depicted on the "inner" screen were the usual white, but I honestly didn't see/process that against the bolder background.

dredmorbius 16 hours ago 0 replies      
On the page layout: one very positive sign is that my custom stylesheet appears to make no difference whatsoever to how the page displays. Which means that either the CSS classes have all been changed or my suggestions (recently here on HN) were all adopted.

I noticed the change, and it didn't annoy me much (any change is a bit discombobulating), which is actually high praise. I haven't stumbled into any "woah, that's cool!" features yet (though I'm noticing a few things and nodding appreciatively).

Just checked the "what's new" and I'm pretty much liking.

I'd still love to see time-bounded search provided. That's one of the very few uses that will draw me back to Google for general Web search (Google's special collections: books, scholar, news, etc., may bring me in more often).

I've been using DDG off and on for a couple of years and solidly since last June. It's definitely working for me.

bluthru 1 day ago 0 replies      
Really like the new update, but I still don't like how there is a dead click space between the results, and I find the background hover to be unnecessary.
dredmorbius 16 hours ago 1 reply      
More search results than layout, but as a friend pointed out, "open source office suite" produces notably and significantly different top results in DDG and Google.

Specifically: the DDG results don't rank the arguably top-rated open source office suite (LibreOffice) at the top of the results page, instead showing an order suspiciously similar to that of Bing. Google (both logged in and out) puts LibreOffice at the top of results, as does StartPage.

Some argue a bias against free software by DDG. I apply Hanlon's razor, but this is one example where improving results would be a bonus.

Screencaps of results:


yalogin 1 day ago 0 replies      
The main thing to me is they still do not have driving directions. That to me is really needed to make it usable for the mainstream public.

Also searching for say chicago, IL does not show the maps tab. We need to search for Chicago IL for that. Not sure why the comma is throwing them off.

clarry 1 day ago 0 replies      
In the old version, the instant answer box would usually load after the results and with some delay. Very often it would materialize the very moment I clicked on a result, causing the content to move, leading me to a place I did not want to visit. That was my biggest issue actually.

I can't seem to trigger it now. So I guess it's an improvement.

donbronson 23 hours ago 0 replies      
Adding images makes DuckDuckGo now a legit competitor for Google for my usage. The usability has also dramatically improved as well as load times. Their mobile javascript needs to recognize gesture swiping and other minor UX improvements. But this is a leap forward for them.
frik 1 day ago 1 reply      
Your intro says:

  Smarter Answers
  Answers to your questions from the best sources,
  developed by our open source community.
Where is the open source repository located? I would like to browse the templates/recipes/sources. Found nothing on http://duckduckhack.com

chimeracoder 1 day ago 2 replies      
I've been using DuckDuckGo as my primary search engine for almost three years.

It's improved fairly steadily in that time (as measured by how often I end up falling back to appending "!g" to my search), but this is the single biggest improvement I can remember in my time as a user.

Aside from the auto-complete (which is nice), it feels significantly faster, and it's also easier to parse visually.

I'm really excited about seeing DuckDuckGo evolve, and it seems more and more people are as well: https://duckduckgo.com/traffic.html

hysan 19 hours ago 0 replies      
I really like the new design, but I'm still hoping for better discovery of bangs. Perhaps DDG could include links to suggested bangs alongside Images and Videos based on the search term. With the final link being a dropdown of all other available bangs (sorted by potential relevance maybe). Another possibility would be to include the list of bangs (or a shortened one) in the pull out side menu. For me, bangs are one of the best features of DDG, and it's disappointing that they aren't more discoverable.
hngiszmo 1 day ago 3 replies      
The first time I saw that (so-called) design I literally hit refresh 5 times hoping to get the missing CSS file. Having everything in just light grey and white doesn't really help you find anything quickly, and why the URL path is hidden on mouse-out is beyond me.

DDG is my search of choice and the pain induced yesterday is not enough to swap back to google but still, not happy at all :(

TheLoneWolfling 1 day ago 3 replies      
Not sure if anyone at DDG would ever read this, but my comments on the preview are still valid.

The contrast is way too low, it prefers vertical over horizontal (I, like many people, have a widescreen monitor; displaying 3 search results by default is a little absurd), and a couple of other issues.

It feels like a mobile interface.

Oh, and there's no way to revert to the old version. The options merely change the color scheme, as far as I can tell.

orrsella 1 day ago 2 replies      
Interesting to see that many of their "whatsnew" examples use Yandex[1]. Is that a new partnership?

[1] http://imgur.com/3tBrS7h

cgag 17 hours ago 0 replies      
I've started using DDG instead of consistently skipping it with !g since the new design came out. I didn't really grok how much the design played into my trust of its results until now.

Big improvement imo.

okbake 1 day ago 0 replies      
I think I found a bug. I'm using the dark theme and customizing the colors. If I set my background color to #000001, all of my text will turn blue (#0202FF).

Also, setting the Header option to Off is the same as On With Scrolling. This is on ff29.

Other than that, I think I'm finally switching over to ddg.

ankurpatel 1 day ago 0 replies      
Good design but disappointing that the search and Menu option disappear when the browser size is shrunk to tablet or mobile phone resolution. Not responsive.
rane 1 day ago 1 reply      
I used DDG as my main search engine instead of Google for two weeks just now, but ended up going back because very often DDG just couldn't find the results I'm used to finding with Google using the same keywords.

Usually I had to add "github", "npm" or some other word that would narrow it down for DDG, while Google just knew what I wanted and/or already visited.

Maybe it's the lack of personalized search results or Google is just smarter. Either way non-personalization is a double-edged sword.

lazyjones 1 day ago 1 reply      
Looks good, but they really need to weed out some spammy websites from their index.

For example, all the <domain>.<something>stats.com sites that try to get traffic when people search for various brands, or this strange one: http://www.loginto.org/<domain>-login (apparently it tries to steal login credentials, or I don't see the point).

scrabble 1 day ago 1 reply      
Still not totally in love with it, but it's still my primary search engine. While looking for ways to alter the UI, found the Dark theme -- so that was a plus.
cvburgess 1 day ago 1 reply      
The UI is super slick. Bravo!!

I miss some of the simplicity of the old DDG, but after adjusting, the only thing I find missing is the StackOverflow integration. It may totally be there, I just haven't had the right query yet...

gdonelli 9 hours ago 0 replies      
Nice. First thing I tried was to scroll down (I was on my Mac), I think because of the cut-off iPad(?)/screen. Was it just me?
blueskin_ 10 hours ago 0 replies      
Horizontal scrolling on a desktop is FAIL.

I also hate the way results have no apparent division between them, not even a prominent title; it makes them all blur together when I am scanning the page.

dvcc 1 day ago 1 reply      
I feel like this hasn't really been tested in Chrome on Windows. The gray detail information on search results is pretty hard to get past. I kind of just gave up using it halfway through; it looks like it might be better in other browsers, though.
bm1362 1 day ago 0 replies      
When it loaded, it failed to load the CSS etc. I saw the typical white page with black text and thought maybe this was their way of chiding those critical of the redesign.
rakoo 1 day ago 0 replies      
Very good job DuckDuckGo team! I was just thinking that I'd have to switch back to Google because of the poor results... but this new experience has given me some hope.

What saddens me though is that we (as in "the users") still don't have a strong guarantee on the respect of our privacy. We still have to trust the DDG team. I know there is no easy technology to do it, but still, the whole thing is only marginally better than using Google.

tzury 1 day ago 0 replies      
Dear Gabriel Weinberg, after so many posts on HN, I am still missing the point behind DDG.

Can you tell me, the end user, what the other benefits of using DDG are aside from _privacy_ (given I am using Chrome incognito by default)?

webwanderings 1 day ago 0 replies      
All the more power to competition and diversity of choices. But I see these reinventions and makeover campaigns and I really wonder if things are going well or not.

I use search engines for a niche blog, and I have a need to keyword-search certain specific terms which are not common words. I have consistently tested all the available search engines (there aren't many). And I have always arrived at the same conclusion: there is no better search engine out there than what Google maintains.

I am no blind Google lover, but when it comes to practicality of effective and useful products, you have to have the best, in order to make your case.

backwardm 1 day ago 0 replies      
Just switched my default search engine to DuckDuckGo for a self-initiated 10 day trial. All the work you've put into the new layout / results look great.
me_bx 22 hours ago 1 reply      
The "Meanings" feature is a great thing, semantic and ubiquitous at the same time.

It works well with "orange" as in the example, but searching for "Apple" directly shows results for the company without displaying the "Meanings" panel. We can't see search results for the fruit using that term, which is quite disappointing.

It gets more puzzling when you search for "Apples" and are shown the Meanings tab.

try: https://duckduckgo.com/?q=orange vs https://duckduckgo.com/?q=apple

Edit: apart from that this redesign is very pleasant :)

wtbob 16 hours ago 1 reply      
'Sorry, this page requires JavaScript'
Holbein 22 hours ago 0 replies      
I don't like the low contrast and drab grey of the result page. It makes it much harder to jump between results with the eyes.

Luckily, there is a "classic" mode. Please Gabriel, make classic mode the default mode again.

gcd 17 hours ago 0 replies      
I never really gave DDG a shot until now. I tweaked the link color as suggested above to the DDG orange #C9481C (surprised blue was the only option.. had to use custom color and dig into your CSS to find that) and I think I'll give it a shot for at least a week. !bang seems to make up for any deficiencies (I'll probably be using !gm the most, for when I need directions).. right now things are looking great. Keep up the good work!
vohof 10 hours ago 0 replies      
Wish they'd add pronunciation to their definitions https://duckduckgo.com/?q=define+duck
pubby 1 day ago 0 replies      
I hope a setting gets added to make the images and videos tab always display fullscreen results. The default display of only 4 images at a time is pointless to me. Good work otherwise.
fotoblur 21 hours ago 0 replies      
First I'm the founder of Fotoblur.com, a creative photo community. I just went to check out the site. What I'm concerned with is when I search for fotoblur (https://duckduckgo.com/?q=fotoblur), and go to images, it looks like you've slurped the image source and not the source page the image comes from. You're also providing a link to download the image. Don't you have any thoughts for user's copyrights or even content providers of which you've swiped content from? Boooo.
anilmujagic 1 day ago 0 replies      
Is there a way to filter results by time, like on Google? I can't find it.
hrjet 14 hours ago 1 reply      
I like DDG, but have to ask, what is the revenue model? Is it going to serve ads eventually?
izzydata 21 hours ago 0 replies      
This is really neat. I played around with the site a while back and found it particularly displeasing due to its layout and design, but now I'm really liking this modern and more minimalist look.
mstade 22 hours ago 0 replies      
I'm loving the new version. I tried switching some time ago, but found the results lacking and the experience just annoying enough to not help me get to where I wanted. Now with this new version it's a whole different ball game. I've been using the beta for a while, and it's just so good.

I'm loving it; excellent work!

ixmatus 23 hours ago 0 replies      
Awesome change, results are much improved too, using it as my default.
geekam 1 day ago 1 reply      
How to turn off auto-scroll and turn pagination on?
shmerl 22 hours ago 0 replies      
Is image search new functionality or I just missed it in the older UI?
PaulKeeble 1 day ago 1 reply      
I am a not a big fan of all the results being down the left hand side of the page. Considering how the top fancy gadget thing seems to extend well past the right of my page with silly right arrow buttons it seems a lot of the screen is just being wasted and it would be nice to have the results at least centred.
deathflute 20 hours ago 1 reply      
A question for DDG or anyone who might know: How does DDG plan to monetize this without storing any data?
Thiz 1 day ago 0 replies      
If I could change the name of two great ideas doing great stuff they would be Ubuntu and DuckDuckGo.
nchlswu 1 day ago 1 reply      
Please. There's no need to have the nondescript hamburger icon on a page designed for desktop.
s9ix 23 hours ago 0 replies      
This looks pretty awesome! Good to see them doing well. Sad realization: 'what rhymes with orange' did not give a cool response. I expected it to at least try some smart response, haha.
wuliwong 22 hours ago 0 replies      
Wow this looks great. Just set it as my default search engine. Thanks Gabe and company!
sergiotapia 1 day ago 0 replies      
Love the recipe search! This is fantastic!
brent_noorda 1 day ago 1 reply      
On my iPhone 4 browser, I don't find any way to close the DuckDuckGo web page. Until I figure that one out, this new DuckDuckGo is YuckYuckNo (ha ha, I made that one up myself, I'm so Ducking funny!)
Asla 1 day ago 0 replies      
Very cool duckduckgo.

A question: where do the DDG guys get this massive taste for the color red?

Thank you.

FlacidPhil 1 day ago 0 replies      
I love the Forecast.io integration, by far the most beautifully done weather app out there.
api 1 day ago 0 replies      
I tried DDG about six months ago and went back to Google, but I recently tried it again. The gap is closing fast. As of now it's my default search. Google still does a better job seemingly "understanding" queries sometimes, so occasionally I go over there, but I'd say I'm only doing that about 5% of the time.

One of my favorite things about DDG is that I do not have to worry about "search bubbles." I don't have to worry that DDG is profiling me and de-prioritizing results it doesn't "think" I would want to see. I know Google thinks search bubbles are a feature but I think they're a bug. I don't want some algorithm trying to reinforce cognitive biases for me so I don't experience the shock of a dissenting opinion. I've observed a few times that DDG seems to do a better job finding really obscure things, and I've wondered if this might somehow be related to profiling algorithms or lack thereof.

I also find the level of data mining Google (and Facebook) engage in to be creepy, invasive, and to hold a high potential for abuse. I'm certainly open to alternatives whose business model does not revolve around that kind of intrusive personal profiling. I'm aware that DDG does have an ad-and-analytics business model, but they seem to be taking the high road with it.

Prediction: "privacy is dead" will in the future be regarded as an idea that greatly harmed several multi-billion-dollar companies. I think it's firmly in the realm of utter crackpot nonsense, and anyone who thinks this is either hopelessly naive or delusional about the political, social, and economic realities of the world. A full-blown user revolt is underway.

finalight 9 hours ago 0 replies      
why duckduckgo instead of google?
idealform01 23 hours ago 0 replies      
doh! I kept trying to scroll down the page to see the rest of the image that looks like 1/3rd is cut off
higherpurpose 1 day ago 0 replies      
It seems to cause some problems with the WOT extension?
whoismua 1 day ago 0 replies      
DDG is my default SE. Once in a while I have to go to another SE (Bing second, Google third), but it's a small price to pay to give them a shot.

Hopefully the market share will be more evenly distributed among SEs. Let's do our part

oldgun 15 hours ago 0 replies      
Looks good.
newbrict 23 hours ago 0 replies      
since when does noch rhyme with duck