It stops the drunk text-transform, and you also don't have to worry about hiding the text from the channel you're trying to talk into.
Overall, Blizzard has been very good about adapting to what their addon community is trying to do. They add official support for hacks if they like what the addon does for the game, and deliberately break some if they don't like its effect.
Not sure if this would have any value, but I'm sure someone enterprising could find a way to exploit others' trust. That's not a new concept.
 (This of course assumes that the only lisped-up content is the usernames, not the whole syntax, which I think is an acceptable assumption given that it's fully out of date anyway)
This summarizes 90% of the tips and tricks that I learned. Glad you HNers enjoy it!
Interesting. I'm curious if this is always the case near borders and in airports. NSA/CSEC leaks have shown they are vacuuming up all cross-border roaming alert SMS texts.
So, was it a MITM BTS?
I had to write the scripts to implement this concept myself and it wasn't a quick and easy task. It would have gone a lot more smoothly if I had been able to abstract away some of those queries with a tool like this.
Why can't it just be a matter of removing the geoip checks from your servers?
(I know, I know, still... the internet was not meant to be this way.)
Germany, France, Austria, Switzerland, Belgium and Luxembourg
Actually Netflix is already based in Luxembourg for its European activities and there have been reports that they're going to move to the Netherlands: http://www.wort.lu/en/business/move-to-the-netherlands-netfl...
I absolutely love this mindset from Netflix. In every conference talk or article I see from them, they talk about stuff that didn't work, or that worked for a while but didn't scale, and then show what worked better.
Can anyone explain why this is? I hate watching dubbed video, especially when it's live action.
What does this have to do with climate? It seems like your time scale doesn't overlap with typical climate models. I guess it feels like a speculative claim (better pressure measurements -> fine-grained model improvements at small temporal scales -> better climate model outcomes).
Also, I wonder if you have a link to a paper or presentation that details how these measurements could fit in with the assimilation models that are used in weather forecasting? I see a link to Cliff Mass' blog (as a whole), but I'm more interested in a specific reference. In particular, I wonder if it's possible to quantify how much a perfectly-accurate ground-level pressure field could constrain upper atmosphere dynamics. Has there been a session at AGU (http://fallmeeting.agu.org/2014/), for example, examining how this could work? Or is it too new for this yet?
Do you have any examples/links to research being done with the data currently? I poked through the blog quickly and didn't see anything in depth.
I'm a user and have been negatively impacted by the feed fetching optimizations - daily feeds are often a few days behind and come in bunches. Two examples:
- Penny Arcade updates its comics Monday, Wednesday, and Friday, always at 7:01AM UTC, and then news other times during the week. It's Wednesday at 4:25PM UTC - 9 hours after - and goread hasn't picked it up.
- Dinosaur Comics is updated weekdays. I'll eventually get all of them, but usually two or three at a time. For example, yesterday I marked all my feeds as read; today, I have entries from Monday and Tuesday, but not from Wednesday.
I had hoped that the move to the everyone-pays model would give you the resources (either developer or quota) to fix these issues, but they've gotten no better or maybe worse.
I haven't looked at what you're doing, but I believe Google Reader used pubsubhubbub where available to reduce/eliminate polling for many popular feeds.
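For reference, a PubSubHubbub subscription is just a form-encoded POST to whatever hub a feed advertises; here is a minimal sketch in Python, with all URLs as placeholders (a real subscriber also needs a callback endpoint that answers the hub's verification request):

    # Minimal PubSubHubbub (WebSub) subscribe request; URLs are placeholders.
    import requests

    resp = requests.post("https://hub.example.com/", data={
        "hub.mode": "subscribe",
        "hub.topic": "https://example.com/feed.xml",        # feed we want pushed to us
        "hub.callback": "https://reader.example.com/push",   # where the hub delivers updates
    })
    print(resp.status_code)  # 202 means the hub accepted it and will verify the callback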
I honestly didn't have a great experience with my last bug report, so I haven't tried again.
You could charge users a fee per feed proportional to server cost (e.g., frequency of posts) and inversely proportional to the number of subscribers.
* Unpopular/infrequent feeds (like my friends' blogs) would be free
* Popular/infrequent and unpopular/frequent feeds would be cheap, maybe $1/year/user/feed
* Popular/frequent feeds would cost more, maybe $10/year/user/feed
This way you can peg your income to an exact multiple of your costs.
"30-day trial: This action cost me about 90% of my users. Many were angry and cursed at me on twitter. I agree that it is sad I did not say I was going to charge from the beginning, but I didn't know that I would be paying hundreds of dollars per month either."
If you decide to charge for it, you're a greedy bastard; instead, if it's free, they say "if you aren't paying, you are the product". Others complain that the product doesn't work, when instead it's a case of PEBCAK. When it's not, it means you're going to say goodbye to a couple of nights' sleep or a weekend or two, or maybe it's ONE feature away from being perfect (again).
...sometimes I hate people :(
I've experienced this myself, and I'm hearing it more and more from others. Maybe this is a market need that is going unfulfilled.
It's always surprising to me when devs are surprised by outrage at the change/removal of a free product. There is a non-negligible cost for a user to research/choose/setup/learn a new tool. In this case, a feed reader has favorited articles, read/unread state of articles, etc. When you pull the rug out from users that have made that investment, who now have to start over, they are going to be mad, regardless of what they paid.
Without knowing what your server load looks like, I would imagine you could save a couple hundred dollars a month in hosting, which would go right to your bottom-line profit. A couple hundred dollars a month isn't huge, but at this point in your business that's, say, $2,400 a year. From the looks of it, that's at least 2-3 months' worth of revenue or almost 5 months' worth of profit.
I think it's at least worth considering with where your project is at right now.
I'm not convinced that being locked in to one cloud provider's services and APIs is healthy long term. It ties you to that ecosystem and makes it harder to consider alternatives, even if your needs are quite straightforward, so you can end up paying hundreds of dollars a month for hosting when you don't need to be.
Ruben Gamez of Bidsketch had a similar story about switching from freemium to paid only -
"A simple rule in computers is to make something run faster, have it do less work. I remember reading about how grep works quickly. Instead of splitting a file by lines and then searching for the string, it searches for the string, then finds the newlines on either side. Thus, if a file has no matches, it won't ever have to do the work for line splitting. Done the naive way, it would always split by line even if there was no match. Do less work."
Good observation, but I doubt that's even remotely the reason why grep works quickly.
This is especially interesting to me. My side project http://www.longboxed.com was recently launched to a modicum of regular users (~300). My app runs on Heroku on the free tier with the 'Hobby Basic' level of the Heroku Postgres database. All told it costs me ~9 dollars a month. No big deal.
However, if I ever stepped the site up to another tier I'd be looking at ~50 bucks for the database and ~36 bucks for another process instance. These expenses can add up fast for a site that doesn't currently generate any money.
Anyway - it is nice to see examples of introducing a pay model into your app after it has launched.
Thank you for such a lovely product. Someday I would like to contribute to your project(s) more.
btw: lots of typos in the article; be sure to spell check!
https://news.ycombinator.com/item?id=7402393
https://news.ycombinator.com/item?id=7408089
Already thinking about the apps that will use this! Thank you.
"Tricks of the Mind" by Derren Brown also has some basic information on memorization, along with a host of other interesting topics.
I tried some memorization techniques. It's a great feeling to improve 10x at a (specific) memorization task with very little time and effort by using one weird trick.
I used to try to remember random words during my commute and recall them when coming back. I took a list of random words from /usr/dict and set myself a timer of a few minutes to remember 30 words using memory palace. I used various places from the commute itself to store words. It was fun to see the words pop-up automatically in specific places on my way back. My performance declined after a while because I wasn't able to erase the previous words.
When I learn more Haskell, I may succeed in figuring out what's wrong.
Until the Fed is dealt with and the financial sector is subject to true market forces, it will continue to dominate the economy and produce destructive malinvestment.
Really? Isn't this what the larger part of the book is about? It's because of rentier/patrimonial capitalism, opportunity not being equally distributed in the system, etc.
Alas, I am not optimistic. People have suffered for generations under horrific conditions. No reason it can't happen here.
Or to be more precise: No reason it can't happen here again. Don't forget slavery and Native Americans.
So this was done as a marketing stunt, just a long time ago.
Anyway, this is one of the first things I learned to say and spell when I first moved to the UK. The full name is Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch, although read the page to see the history.
Curious, yes, but ... ?
For anyone who's interested: later this year I will be giving some math workshops and a lecture in Bangor. I can probably get invites for guests if anyone is interested.
I played a lot in college, but never really got "good" per se. We had a group of about 20 or so of us who played semi-regularly. There were two players who were both far superior to everyone else, but they didn't always win. Just like everyone else, they naturally suffered the turmoil of a bad beat here and there. But aggregate their winnings over the course of the two years we all played together, they came out far ahead of everyone else.
And I think that's probably the main point here. These two knew how to grind out the bad beats and come out on top over a long enough time frame.
I also have a good friend who crabs in Alaska (he and his dad are on Deadliest Catch together) - each season is a "gamble" with how long it will take to catch enough crab to meet quota, but it's not like they're putting money into a slot machine and hoping for the best. There's a system that leads to success. Sometimes they get "lucky" and their season ends quickly; other times it's more of a lengthy grind.
They seem awfully similar.
Isn't inducing the player with the best hand to fold before showdown the same as a hand decided without a show of cards? Or do they mean that 85.2% of hands were decided without seeing the flop? Neither sounds right.
Edit: Will downvoters explain their reasoning? Btw, someone else said something about how poker is a game of luck at low skill levels and a game of skill at high skill levels, but that's not really how it works. Poker is a game of skill if you're playing against bad players, because the game is consistently beatable with sufficient skill. The better and more sound your opponents become, not just relative to you but on an absolute basis, the more important the role of luck becomes.
Some games still involve more luck than skill--blackjack, for instance.
Other games involve more skill than luck, and poker certainly falls into that category.
Poker is a game of skill with an element of luck. It is not a game of chance.
It is worth noting that the other 14.8% of hands, where a showdown is reached, are typically the ones where (much) larger amounts of money are exchanged.
So, betting decisions might decide the outcome of a majority of hands, but it's unlikely this corresponds to a majority of the money flow.
Just like life: the harder you work, the luckier you get.
Dan Harrington made the final table in 1987 (152 entrants), won the final table in 1995 (273 entrants), and made back-to-back final tables in 2003-2004 (839 and 2576 entrants). Or in a lesser example, Johnny Chan won back-to-back in 87-88 (152 and 167 entrants).
The Cavs winning the #1 pick three times in the last five years, however. That is rigged.
It does appear that serious poker players do better in the long run as they not only consider pot-odds when playing but also management of their bankroll to ensure long-term survival. Is there an element of chance? Of course, but it's not the only factor - few things are black and white.
But if someone chooses to go all-in blind pre-flop, then they are choosing to play the game as pure gambling and their version of the game would be pure chance.
But again, it's all about the circumstances. Poker is often compared to day trading in the markets, and though it's not a perfect comparison, there are some analogies. For example, options can be and often are used in complex strategies to either reduce risk or enhance returns. But that doesn't mean they can't be used to speculate wildly - in those situations, the potential payoff profile begins to look something like a straight-up bet.
This stifles innovation in real-money games. No company will risk spending any money on the design/development of a new real-money game when there is a tangible chance of going to jail and losing everything. This frustrates me a lot, because I enjoy real-money games (poker included), and I'd love to play new kinds of real-money games, but these freedoms don't exist.
There are occasionally patterns in roulette, especially with certain dealers. In addition, there is a little trick casinos sometimes employ. They loosen the legs on the table to the point it wobbles slightly. The dealer will then lean against the table and watch as the ball is about to drop. They can then pop the ball out of an undesired slot with a well-placed shove of the hip. Just something to watch out for if you are a gambler.
I firmly believe that, just like in computing, the entire concept of "random" depends mostly on one's frame of reference.
There are other words in both examples that I personally would not use as tags, but I can't really say they would be universally not-useful. I think a vast improvement could be made just by having a dictionary blacklist filled with things like these - from this tiny sampling, contractions seem to be a big loser.
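A dictionary blacklist like that is cheap to bolt on. A minimal sketch, with an illustrative word list rather than a curated one:

    # Minimal tag blacklist: drop known-useless words and anything that looks
    # like a contraction. The word list here is illustrative, not curated.
    BLACKLIST = {"thing", "way", "lot", "something"}

    def filter_tags(candidates, blacklist=BLACKLIST):
        return [t for t in candidates
                if t.lower() not in blacklist and "'" not in t]

    print(filter_tags(["erlang", "don't", "thing", "Haskell"]))
    # -> ['erlang', 'Haskell']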
I have been doing some research towards automatic tagging lately, and I found several Python projects coming close to this goal: https://pypi.python.org/pypi/topia.termextract/, https://github.com/aneesha/RAKE, https://github.com/ednapiranha/auto-tagify
but none of them is satisfying, whereas Algorithmic Tagging of HN looks pretty good.
I have been trying to implement a similar feature for http://reSRC.io, to automagically tag articles for easy retrieval through the tag search engine.
Here is the trained topic model (Nov. 30, 2012) with only 40 topics (for file-size mainly) https://dl.dropboxusercontent.com/u/14035465/hn40_lemmatized...
You can load it with Python:
    from gensim.models import ldamodel
    lda = ldamodel.LdaModel.load("hn40_lemmatized.ldamodel")
    lda.alpha = [lda.alpha for _ in range(40)]  # because there was a change since 2012
    lda.show_topics()
"Erlang and code style" process erlang undefined file write data true code
But doesn't the auto-tagging feature make too much noise for a business use case? For example, it tags an article about Amazon and includes Google in the tags. Whitelisting words wouldn't fix this (Google would be a whitelisted word if Amazon is).
I don't know about LDA though. Perhaps a proper tag administration would fix this, but then you'd have to remove tags on the go.
PS. Screenshot included + it's already in alpha in a company with 100 users.
Some of these points are related to encouraging users to tag content, but auto-tagging also seems problematic.
To me something more along the lines of entity extraction is more useful because it is a well defined problem, and can be used to improve a lot of other applications.
> stream rotate type/page font structparents endobj obj endstream
This could be really useful in ecommerce for creating search keywords for category pages. The noise in the results doesn't matter; so long as it gets 'T-Shirt' and someone searches for 'T-shirt', all is well and good.
Are you looking to plug what you have into something such as the Magento e-commerce platform? The right clients could pay proper money for this functionality. It is something I would quite like to speak to you about.
After a while the system converges to a very useful structure; new members can see correctly tagged articles, and the system learns their interests by itself.
Do you know of anything like this that already exists?
It looks pretty neat, though I'm wondering how you'll make sure the content stays up to date. Manually crawling or relying on the community to post them might let you down.
I don't know if it's that companies don't want to post their security positions or if the existing sites are poorly equipped to meet that specific need, but it was just terrible. Did I end up at the place I wanted for the salary I wanted? Sure. Did I end up at the best place I was capable of working at the highest salary I am capable of earning? I doubt it, because, according to Monster, CareerBuilder, and Dice.com, there are no information security positions within 50 miles of Pittsburgh and there haven't been for at least the past six months.
Minor nit, your blog header doesn't link back to the main site.
While this sounds fine in principle, it soon gets messy. A configuration change may require everything to be rebuilt. But if you rename or delete a source file, it doesn't track how that ended up in the output directory, so you end up with a slow accumulation of crud. It turns out to be fairly easy to confuse all the incremental logic and end up with messy builds. Add poor decisions like using the datestamps of items in the output directory to build the sitemap (it doesn't set the output datestamps to those of the input items) and you get important failings.
In my opinion, incremental builds should give the same results as full clean builds or you have a fragile non-repeatable build - something that is very undesirable. I wrote a bit more at http://www.rogerbinns.com/blog/on-nikola.html
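The sitemap symptom in particular would go away if the generator stamped each output file with its source's mtime instead of the build time. A sketch of that fix (illustrative only, not what Nikola actually does):

    # After rendering, give the output file the source's modification time so
    # rebuild time never leaks into the sitemap's "last modified" dates.
    import os

    def copy_mtime(src_path, out_path):
        st = os.stat(src_path)
        os.utime(out_path, (st.st_atime, st.st_mtime))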
I'm currently trying out Pelican just because I'm comfortable with Python and Jinja2 already, but I really have no idea if it's actually the best choice for the kind of site I'm trying to build.
e.g. with archive pages like /<author>/<lang>/<yyyy>/<mm>/ and category pages like /<lang>/<cat>/<subcat>/
I fail to see any. I've tried to configure many of the available generators, but none of them seems to scratch the itch I have.
(Disclaimer: I'm biased, I wrote one).
On the home page - okay, if you need to do so. But why, just why on content subpages? Do you really believe that your purple banner with Roboto is that incredibly beautiful that I want to just stare at it for a minute before scrolling down to the meat, each single time that I click on a link?
"The computer industry is the only industry that's more fashion-driven than women's fashion." Damn, Ellison was so right about that.
Don't get me wrong, I am actually a typographer, and I am the last person on earth who wouldn't value appropriate white space. But ... seriously, find the right balance!
I suspect there are other generators on staticgen.com that also use consolidate.js and suffer the same misrepresentation. (I just don't know the other generators well enough to know off the top of my head which ones use consolidate.js.)
Of course, an error such as this makes me doubt all the other information presented on staticgen.com
Splitting hairs maybe...
It's going to be a long conversation :-)
The justification for peering is not equal traffic, it's equal value - my customer wants to communicate with your customer. Regardless of the direction of traffic, the traffic is equally valuable to both of us because the traffic is the primary thing our customers are paying us for.
Unless, of course, I can get you to pay me for it anyway because of some unrelated advantage such as the fact that your customers can leave you more easily than mine can leave me. Comcast and others are attempting to leverage exactly that - in many regions they have no viable competition whereas Netflix and L3 are much more replaceable in their respective markets. This is a prime example of abuse of a monopoly.
But what does any of that have to do with mandated peering requirements at the NSFnet exchanges? Who would enforce that, and why, when any two major networks can set up peering at any number of meet-me rooms? Requiring that an ISP peer as much traffic as is available or not peer any at all seems ridiculous; some ISPs will suck more than others, but that's the problem of them and their customers, not a problem for the entire Internet.
Meanwhile, I'm surprised there aren't more startups and VCs looking to bet that "new ISP that doesn't suck" is a viable business model. People are chomping at the bit for Google Fiber, which seems unlikely to grow to a national level without developing competitors. This is a space with very few competitors, and there hasn't been serious competition in that space since DSL stopped being a viable option.
The problem is that in many, many markets, Comcast (or another ISP) is pretty much the only choice. Customers don't have another option, no matter how much Comcast underfunds its peering infrastructure or gets thrown out of peering exchange points.
So what is the consequence to Comcast for underfunding? What is the consequence to Comcast for even such a disastrous outcome as getting kicked out of the peering exchange point? Not a lot.
I'm not sure what the solution is, but 'regulate them as a common carrier' is certainly part of it, since they are a monopoly, and the common carrier regulatory regime was invented for exactly such a monopoly.
The solution is to make it 'capitalistic'. Change all of our internet contracts from unlimited (up to 'x' GB/month) to a simple $/GB cost.
It would be in the ISP's best interest to provide their customers the fastest internet connection possible. E.g., if a customer can stream 4K video vs SD, then the ISP would make more money per unit time.
Think of it this way: if Comcast charges $0.25/GB, and a Netflix SD show is, say, 1GB and HD is 4GB, then Comcast grosses $1 for HD and $0.25 for SD for the same customer streaming request.
Over time it's likely the price per GB would decrease, just like it has for cellular.
On a more evil side, this would also stop cord cutters. Pirating content is no longer 'free', and Netflix would cost significantly more than $10/month ($10/month + 'x' GB * $/GB).
As for what rates to expect, if Comcast charges $30-55 for 300GB in ATL, that'd be about $0.10/GB to $0.20/GB. As for speed tiers in a $/GB system, your guess is as good as mine.
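Back-of-the-envelope, using this thread's illustrative numbers (4 GB per HD show, $0.10-$0.20/GB) rather than anything Comcast actually charges:

    # Monthly cost of streaming under per-GB billing, illustrative rates only.
    def monthly_streaming_cost(shows, gb_per_show, price_per_gb, netflix_fee=10.0):
        return netflix_fee + shows * gb_per_show * price_per_gb

    for rate in (0.10, 0.20):                     # the $/GB range estimated above
        print(rate, monthly_streaming_cost(30, 4, rate))
    # 30 HD shows/month -> $22 at $0.10/GB, $34 at $0.20/GB: well above a flat $10/month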
> The solution to this problem is simple: peering at the original NSFnet exchange points should be forever free and if one participant starts to consistently clip data and doesnt do anything about it, they should be thrown out of the exchange point.
I do have a couple questions though - who is in charge of the original NSFnet exchange points, and do they have this authority?
There is one further step, whereby the prices of the incumbent monopoly are regulated in areas where no competition exists. Ironically this works in the opposite way to how you'd think, as it forces the incumbent NOT to offer their lowest prices in that market - the intention being to make monopoly areas prime targets for competition and to ensure that potential competitors aren't scared out of the area by predatory pricing.
It's an odd system with good and bad on both sides, but it seems a lot better than being stuck with a single source of Internet access.
Obviously, there's no free market in the status quo: we (consumers) basically expect to pay a low, ever-declining price for bandwidth, while someone else eats the costs of a growing network infrastructure. There's an economic disconnect, and legislating that it shouldn't exist seems worse than futile.
I say: pass the costs on to the consumer, and break down the monopolies on last-mile cable service. If the cable companies had to compete for subscribers, they could still pass on the costs of improving their infrastructure, but they'd have to compete with everyone else to do it.
In other words, the problem here isn't "net neutrality" -- it's that we've got a monopoly at the last mile that we need to destroy.
That's some big hand waving, because laying the cable costs a fortune and takes many years to recoup, which is why so few are competing for this "super profitable" business.
The current situation is that Comcast doesn't have the equipment/resources to handle extra internet traffic at its peers. Most people want Comcast to buy more stuff to handle it; why don't we think the opposite way and get Comcast to decrease its amount of traffic?
If we can get Comcast to consume less traffic, they wouldn't have to complain to other peers about load asymmetry.
The best way to decrease traffic? Make Comcast have fewer customers.
Why does Comcast have so many customers, even though their resources cannot handle it? Because they have a government-mandated monopoly in the last mile, so they end up with more customers than they can handle.
We can conclude that last-mile monopoly -> network congestion -> forcing L3 to pay for peering.
If Comcast had to compete with other ISPs for the last mile, the traffic load would shift from one single entity (Comcast) to 10+ smaller ISPs. In that case the traffic load problem would not exist.
Another solution is to break up Comcast.
See? This is a perfect opportunity. Comcast can have its multi-tier network, but at the price of the last-mile monopoly. After all, if they want to have the right to choose peers, we customers should also have the right to choose ISPs.
The Amsterdam Internet Exchange is the largest and most important exchange in Europe, and its peak traffic each day reaches 3 terabits/sec.
I work for a European ISP and the problem we have is the location of the peering. Big content providers will happily peer with you in, say, Palo Alto or Miami, but they will refuse to add a peering connection in Europe. Why? Because today the problem is about WHO pays for the intercontinental route (where bandwidth is limited and expensive).
Level3 is known in the industry as a pioneer of bit-mile peering agreements. This means you have to sample the origin and destination of the IP packets and do some calculations to know how many miles each packet has traveled, then pay or get paid if someone dumps long-haul traffic on a peer. Getting to this is complicated with current technology, and many companies are refusing to peer with Level3 because they don't know what will happen to their business under bit-mile peering agreements.
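Conceptually, a bit-mile settlement is just "bits carried times distance carried", summed over sampled flows. A toy sketch with made-up flows and coordinates; real settlements are far more involved:

    from math import asin, cos, radians, sin, sqrt

    def miles(a, b):
        # great-circle distance between two (lat, lon) points
        (lat1, lon1), (lat2, lon2) = [tuple(map(radians, p)) for p in (a, b)]
        h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 3959 * 2 * asin(sqrt(h))

    # sampled flows: (gigabits carried on our network, handoff point, exit point)
    flows = [
        (40, (37.44, -122.16), (25.77, -80.19)),   # Palo Alto -> Miami
        (10, (47.61, -122.33), (45.52, -122.68)),  # Seattle -> Portland
    ]
    print(round(sum(g * miles(src, dst) for g, src, dst in flows)), "gigabit-miles")
    # the asymmetry in this number between two networks is what gets settled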
Maybe Netflix could find some creative uses for all that idle viewer upload capacity to reduce the deficit ;)
- Have every Netflix client cache and serve chunks of the most popular streams P2P-style. You could have a DHT algorithm for discovering chunks or have Netflix's own servers orchestrate peer discovery in a clever way, for example by only connecting Comcast customers to peers physically outside of Comcast's own network. This would reduce Netflix's downstream traffic and increase viewer uploads (a rough sketch of this peer selection follows the list).
- Introduce the Netflix-Feline-Image-KeepAlive-Protocol, whereby every Netflix client on detecting a Comcast network uploads a 5MB PNG of a cat to Netflix's servers over and over again while you're watching a video. Strictly for connection quality control purposes of course.
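A rough sketch of the peer-selection idea from the first bullet; the ASN set, candidate list, and the whole mechanism are placeholders, not anything Netflix actually does:

    # When the viewer is on a congested ISP, offer off-net peers first so the
    # chunks don't cross the congested interconnect.
    CONGESTED_ASNS = {7922}                       # illustrative; 7922 is Comcast's ASN

    def pick_peers(viewer_asn, candidates, want=8):
        """candidates: (peer_id, peer_asn) pairs known to hold the needed chunks."""
        if viewer_asn in CONGESTED_ASNS:
            candidates = sorted(candidates, key=lambda c: c[1] == viewer_asn)
        return [peer for peer, _ in candidates[:want]]

    print(pick_peers(7922, [("a", 7922), ("b", 701), ("c", 3356), ("d", 7922)], want=2))
    # -> ['b', 'c']: peers outside the congested network are offered first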
Everyone shows loss aversion, and so will be determined to find out why being on Comcast gets them penalised. They will learn about its dick moves, and complain to Comcast to make them remove these fees so they can access Netflix, which they have already paid for access to.
Hardly. We've experienced the whole interconnect brinkmanship locally too (South Africa). It's actually quite the opposite - the interconnect disputes are a lot nastier in other countries because interconnects tend to be paid for (powerful co vs underdog), whilst the bigger US setups seem to run mostly on open peering.
Cringely argues that cable breaks even and money is made on the net, but that's an artificial distinction. What if cable disappeared? Would they still make money if they had to pay for the upkeep of the network with only Internet fees? The desperation and risk of this game of chicken convinces me that the answer might be "not much." The loss of cable might very well be apocalyptic for these companies, at least from a shareholder value and quarterly growth point of view.
What's happening is very clear to me: the ISPs are trying to either harm the Internet to defend cable or collect tolls on streaming to attempt to replace cable revenue. That's because cable is dying a slow death. This is all about saving cable.
The fundamental problem is that cable ISPs have an economic conflict of interest. They are horse equipment vendors that got into the gas station business, but now the car is driving out the horse and their bread and butter is at stake.
And on the other side are the fat-cat, VC-funded video content providers, who don't want to pay for their mp4-based saturation of all the pipes.
This is a negotiation. There are two active media campaigns that are trying to gin up our anger against The Other Guy (tm) as part of their negotiations. I just can't get invested in this nonsense.
2. Rise of ad-hoc local networks. This might come out of mobiles, this might be me dreaming, and it might come with sensible home router designs, but ultimately most of the traffic I care about probably originates within 2 miles of my house - my kids' school, traffic news, friends, etc.
A local network based on video comms - that will never happen. Just like mobile phones.
3. Electricity and investment. In the end this is down to government investment. Let's not kid ourselves: gas, water, electricity, railroads - once they pass some threshold from nice-to-have into competitive-disadvantage-not-to-have, governments step in with either the cash or the big sticks.
Fibre to the home, massive investment in software engineering as a form of literacy, these are the keys to the infrastructure of the 21C and it's a must have for the big economies, and it's a force multiplier for the small.
What is the source of the notion that, because you paid for your consumer broadband, all bits are paid for and the charge for carrying them cannot be split with the other side of the connection? Why is it so bizarre that both sides of the connection have to pay for it? Because you're used to your phone working differently?
As an analogy, you know how you used to pay for a subscription to a magazine and there were ads inside which advertisers (the other side of the connection via the magazine in this case) also paid for? The magazine split its fee in two: you paid part of it, and the advertisers paid the other part. It's the same here.
There is nothing fundamentally wrong with charging both sides. You may prefer a different fee structure but a better argument than "I already paid for it!" is necessary.
Right now I've only tested it with a local Ghost installation, but theoretically it should work with every blogging platform since it uses wget to fetch pages. Since it's basically a very silly and dumb shell script, I didn't want to use something platform-specific such as grunt / gulp / rake / whatever.
Feel free to give me your feedback about it (Bugs, Features, etc.)
THANK YOU THANK YOU THANK YOU!
Currently I'm using a local Wordpress install and a static site plugin and I'm getting kinda fed up with the gotchas. (Detail and links at http://www.oddevan.com/about ) If this does what you say it does, not only can I automate the deploy process, I can move over to Ghost (which I'm already using for its superior Markdown editor).
This turns out to be a pretty difficult thing to build well. Every pentester who can code that owns a Mac wants to rewrite Burp; I myself have died on that hill several times. But rewriting Burp is a bit like rewriting Microsoft Word; so much of the value is in the details, and there are so. many. details.
As an aside, IMO claiming that Charles isn't native is a little disingenuous. Yes, it's built with Java, but I wouldn't dismiss it as non-native (unless you're using native in the purest of ways, meaning that it's "natural" - e.g. Cocoa - to the system). I consider tools like Eclipse and IntelliJ "native" even though their UI may be poor compared to Xcode.
I know everyone is asking the same kinds of questions, but how does it compare to Fiddler?
Our use case is kind of simple. We are building APIs on top of websites that don't have APIs, so a lot of "spying" is required. We had to use Fiddler because it was the only one to correctly handle Flash forms and file upload forms, and being able to globally search for a particular token across every request has been so much of a lifesaver.
Burp was a pain to set up (OS X makes installing and using Java from the command line a ridiculously complicated process), but it looks 1000x more useful than Proxy.app.
On a marketing note, you might want to think about who your market is and what job they use a proxy for. It doesn't seem to offer anything for a security researcher but maybe there's enough there for a web developer.
When you have a lot of tabs open, there is always pollution that gets added to what actually matters.
The big story is that ebay leaked personally identifiable information. Naturally this is buried four paragraphs down.
The database, which was compromised between late February and early March, included eBay customers' names, encrypted passwords, email addresses, physical addresses, phone numbers and dates of birth.
Tell me to brace for an inevitable wave of phishing and identity attacks.
Tell me that bad guys will try to steal my other online accounts with this information.
Tell me to trust no one because bad guys now look legit with my home address, phone number and DOB.
Pro tip: put the real story in the headline. That's also a "best practice".
Seems rather prescient now. Their incompetence has just cost us all our personal information.
Ebay being hacked kind of scares the hell out of me because PayPal has my checking account information with direct access to withdraw funds. A hacker could rob me blind. Like seriously the owner of PayPal should not be telling me this "we have no evidence of" bullshit because there's no alternative to PayPal that online stores actually use and changing your checking account number and routing number is very very painful. You have to get new checks, you lose checking history. Fuck.
PayPal went full retard. The security confirmation question?
Please supply your full credit card number ending in ####.
Um, that's the information I'm trying to protect in the first place.
edit: sorry about the "full retard" - trying to quote from Tropic Thunder/RDJ. did not mean to offend
So, just my entire identity then? eBay really seem to be down-playing the severity of this.
Week 2: "We have observed some limited and negligible instances of credit card information being compromised that coincidentally happened to be linked to eBay accounts. We consider this purely coincidental and feel it is no cause for concern."
Week 3: "Oh god they took everything."
This bothers me. No one cares how many employee logins were stolen. It only takes one to cause a huge amount of damage. Is anyone reading this thinking "oh, it's okay, they didn't take too many employee logins"?
Edit: I can now paste on eBay (not sure what went wrong the first time) but PayPal is still actively preventing pasting a new password.
Has anyone else heard about eBay doing this? I have no way to edit it back to the way it was from what I can tell. It's infuriating -- they changed the word "Buyer" to "Seller" to make it sound like my reply to feedback was referring to myself.
Storage is cheap and you shouldn't be skimping on the most sensitive field in your dataset.
To be honest it takes the piss as they are spamming UK TV with adverts for how secure PayPal is at the moment.
Really wish I never signed up but eBay has a monopoly on the payment types now.
> Sorry. We're currently experiencing technical difficulties and are unable to complete the process at this time.
Does anyone know whether they used per-user salt?
We have had a resurgence of 'Snowden' stories in the last few days, so here is a hypothetical scenario: what does a company do if the hackers turn out to be NSA/GCHQ? It is unlikely that they would drop an email to explain that they had just stolen the whole customer database because of some 'al-qaeda' based reasoning, so you would not know it was them. If you suspected it was them then people would wonder if you had taken your meds. If you got the FBI involved then they would tell you it was some script kiddies rather than the Peeping-Tom-Brigade.
Or, if you did know it was the NSA, then you might think that information was safe in their hands and not feel the need to tell the customers.
I look forward to when we get stories where the NSA are explicitly blamed for a data breach instead of some random Chinese hacker, and that emails are sent out saying 'we have been hacked by the NSA again, can you change your passwords please?'. If the NSA crawled out of the darkness to deny the breach then nobody would believe them.
The app has a nice, kind of Yahoo Weather-like feel. For commercial success, though, I think it doesn't even embody 1% of the necessary work:
* Without some sort of web+mobile experience, you are way behind everyone else that is established in the space (runkeeper, runtastic, strava, mapmyrun).
* The competitors are well-funded and start-uppy - so, even as you perhaps start to add necessary features like route-making, elevation-correction, etc - they will still be innovating and making new stuff.
* You integrated with Pebble, but the other hardware in the space probably makes more sense (like heart rate watches), so that's more work. Or nutritional stuff like MyFitnessPal.
For the marketing, I'd say:
* The app ends up being a demo because of the restriction on number of runs, while the competitors offer something fully functional, including Vima's paid features, for free. Is this going to work?
* The name is probably bad - take it from the guy who likes greek mythology for names. If it's a running app, you might as well say so. Runtastic, Runkeeper, MapMyRun - there's a theme here.
* I don't see any way I can join the community, so Vima can email me later and follow up to see why I'm not using the app after I downloaded it.
I've thought about making a fitness app, and I have some infrastructure such that I can get to market with a more feature-complete product than Vima, in just a few weeks. It's a big brawl to walk into though.
If you're curious, Vima is Greek for "Pace"
More worrying is that it's often not possible to persuade someone who is swayed in the wrong direction, because they just don't have the base level of knowledge to allow it.
On this point I almost want to say that every person who graduates from High School ought to have gone through a rigorous class in logic and another in statistics. It's all well and good to say everyone should have "critical thinking" skills, but you can't get there without some pretty solid intellectual tools.
It's worse to have to be the guy who points stuff like this out on Facebook, where you end up sounding like the science equivalent of a grammar nazi - but I've grown to be fine with it, since there's much less room for interpretation in the results of a limited and specific piece of research.
Money-driven outcomes are not always optimal.
The last few years have brought on a whole different type of news-media hybrid organization (the BuzzFeeds, HuffPos and Gawkers) that is driven primarily by clicks and does not hold itself to the standards of traditional print news. While there were dubious options on paper before (Daily Posts, National Enquirers), the internet is a far greater venue for propagating bullshit with clickbait headlines. Some of the newer sites I'm seeing people post on Facebook have skipped the truth part altogether; they go straight to fabricating stories. TV has gone the same direction with news-entertainment.
I'm pretty concerned. When it's too hard to find signal in all the noise, I'm afraid folks will give up altogether. With Buzzfeed putting out longform articles and NYT putting up quizzes, it's already hard to discern who cares about delivering real news and who will do anything for clicks.
But maybe I'm just young (25), and people have always found echo chambers, and yellow journalism is always something we've had to wade through to find the facts. What do you guys think? Has anything actually changed?
Reporters are incentivized to get the story that sells. Especially as there are more and more freelancers, competition is becoming intense. And let's face it: by nature, we as consumers of information are drawn to the outlandish and sensational. There was an HN story a few weeks ago about someone who put out fake, crazy headlines and got a crazy CTR on Twitter.
Scientists are incentivized to be objective in finding the truth. Scientists avoid making claims about causation until the last possible moment just so they can be sure all the variables have been controlled for and the results are not outliers.
I don't have any well-thought through answers... but thoughts?
After a scan, I think the Boston Globe article is well written.
The title is "Study finds brain changes in young marijuana users". Maybe it should read "differences" instead of "changes".
The study didn't even find that the brains had changed at all, just that they were different.
As the sample was so small, they could just as well have concluded that brown hair made a difference or that people who prefer broccoli to cheese are more likely to smoke pot.
EDIT: By "this topic" I meant science reporting, not marijuana.
If you want to talk about misreporting something, you should start there, not a few articles on people casually using weed.
A bunch of ideas/complaints:
- It's awesome that you're showing me a nice map when I search for places/address, but let's be honest, I'll probably need to load it into an online map (OSM, MapQuest, Google Maps) to get directions. So a "open in map" button would be great (yes, I can copy/paste the address and !bang it, but it's not exactly a great experience)
- Sometimes I just want to search for images or videos. Yes, I can search "Images X" or "Videos X", but it's not nice. Also you get the minimized image/video box. I'd add two bangs, !i and !v (those right now alias to Google Images and Youtube, which have !gi and !yt anyway) to search for images/video and that will auto-open the images box.
- Auto-suggestions are neat, but please add an option to remove the "select-on-hover" behavior. It's really annoying to casually move the mouse and select something else.
That's mostly it; otherwise I'm really, really happy with DDG. Thanks, and I wonder what the future has in store!
Thank you to everyone who provided feedback to us during our public beta period! Please keep the feedback coming so we can quickly iterate. We really do listen to it all.
The only thing that stands out to me as less useful than the equivalent Google search at this point is the hierarchy of the results. Google uses a link-like blue color for the titles of each result, which seems like a leftover from a past age of the web, but is actually useful for scan-ability because the text of the headers stands out.
Compare the current DuckDuckGo... https://i.cloudup.com/vrwZgUkOty.png
...to Google... https://i.cloudup.com/eFCFEE5TYG.png
...to an adjusted version of DuckDuckGo... https://i.cloudup.com/jluIYZWtzz.png
Having an extra color for the headings lets you scan the page much more easily, which lets you get to the result you wanted faster. The downside is that since their brand color is red, it feels "best" to have the highlight color be red. But then that has some negative emotional connotations. I tried green as well, but it didn't stand on its own enough since there's so little green on the page.
Anyways, I've switched to DDG as my default and will try it out for a while again. I also love those favicons that show up next to the domain names.
It came up with some interesting results. The images opened automatically for me (not sure why) and were a little off the mark. Ideally there would be a link to switch between Celsius and Fahrenheit, with maybe even a cookie to save your preference, although I don't know if that's very anti-DDG (does DDG store cookies for anything?). Yahoo "solves" this by having you go to weather.yahoo.ca to default to metric. At any rate, given that 95.5% of the world's population uses metric, it'd be a nice feature.
The other day, I was searching for a Django core developer's contact. I knew his exact name was Baptiste Mispelon so I searched that directly.
On Google, after his Twitter and Github accounts, the first picture is correct, and I did not have to do anything else; the contact info is there, his picture is there, great.
On DuckDuckGo, the picture is not even close, and the first couple of results are not as useful as on Google.
I think it is a mistake to concentrate on clean design for a search engine while the search algorithm is not that good. AFAIK Google's page-ranking algorithm is well known; when I was in university I even heard stories that a student (in the same class as me) reproduced the algorithm on his own!
TL;DR: I want to find relevant information with a search engine, not to look at some nice webpage.
I have reported this as a bug. They have a very good feedback system on their website.
(Edit: How odd; a reload caused the page to be displayed differently, with the images below the text and icons.)
The fonts come from here:
* Someone looking to search immediately may be confused/frustrated as the text entry field is currently not visible until the slideshow ends.
* Consider relocating the "press" button away from bottom right; I almost missed it and only saw it because I'd been on the page for a few minutes, finished the slideshow and was looking for more.
* Also, when I saw that button, I thought it meant "press this to see something cool", so I was disappointed when it only took me to the company press page.
* I really like the background colour scheme on the front page, but you might consider switching it off as it doesn't carry over to other pages. I.e., I found the visual discontinuity a bit jarring when the search and press pages didn't reflect it; that's when I realized that the biggest message I got unconsciously was that my default DDG pages would now be in this colour (with the ability to change it). I see now that the pages depicted on the "inner" screen were the usual white, but I honestly didn't see/process that against the bolder background.
I noticed the change, and it didn't annoy me much (any change is a bit discombobulating), which is actually high praise. I haven't stumbled into any "woah, that's cool!" features yet (though I'm noticing a few things and nodding appreciatively).
Just checked the "what's new" and I'm pretty much liking it.
I'd still love to see time-bounded search provided. That's one of the very few uses that will draw me back to Google for general Web search (Google's special collections: books, scholar, news, etc., may bring me in more often).
I've been using DDG off and on for a couple of years and solidly since last June. It's definitely working for me.
Specifically: the DDG results don't rank the arguably top-rated open source office suite (LibreOffice) at the top of the results page, instead showing an order suspiciously similar to that of Bing. Google (both logged in and out) puts LibreOffice at the top of results, as does StartPage.
Some argue a bias against free software by DDG. I apply Hanlon's razor, but this is one example where improving results would be a bonus.
Screencaps of results:
Also searching for say chicago, IL does not show the maps tab. We need to search for Chicago IL for that. Not sure why the comma is throwing them off.
I can't seem to trigger it now. So I guess it's an improvement.
Smarter Answers: "Answers to your questions from the best sources, developed by our open source community."
It's improved fairly steadily in that time (as measured by how often I end up falling back to appending "!g" to my search), but this is the single biggest improvement I can remember in my time as a user.
Aside from the auto-complete (which is nice), it feels significantly faster, and it's also easier to parse visually.
I'm really excited about seeing DuckDuckGo evolve, and it seems more and more people are as well: https://duckduckgo.com/traffic.html
DDG is my search of choice and the pain induced yesterday is not enough to swap back to google but still, not happy at all :(
The contrast is way too low, and it prefers vertical over horizontal (I, like many people, have a widescreen monitor; displaying 3 search results by default is a little absurd), plus a couple of other issues.
It feels like a mobile interface.
Oh, and there's no way to revert to the old version. The options merely change the color scheme, as far as I can tell.
Big improvement imo.
Also, setting the Header option to Off is the same as On With Scrolling. This is on ff29.
Other than that, I think I'm finally switching over to ddg.
Usually I had to add "github", "npm" or some other word that would narrow it down for DDG, while Google just knew what I wanted and/or already visited.
Maybe it's the lack of personalized search results or Google is just smarter. Either way non-personalization is a double-edged sword.
For example, all the <domain>.<something>stats.com sites that try to get traffic when people search for various brands, or this strange one: http://www.loginto.org/<domain>-login (apparently it tries to steal login credentials; otherwise I don't see the point).
I miss some of the simplicity of the old DDG, but after adjusting, the only thing I find missing is the StackOverflow integration. It may totally be there; I just haven't had the right query yet...
I also hate the way results have no apparent division between them, not even a prominent title; it makes them all blur together when I am scanning the page.
What saddens me though is that we (as in "the users") still don't have a strong guarantee on the respect of our privacy. We still have to trust the DDG team. I know there is no easy technology to do it, but still, the whole thing is only marginally better than using Google.
Can you tell me, the end user, what the other benefits of using DDG are aside from _privacy_ (given I am using Chrome/incognito by default)?
I use search engines for a niche blog, and I have a need to keyword-search certain specific terms which are not common words. I have consistently tested all the available search engines (there aren't many), and I have always arrived at the same conclusion: there is no better search engine out there than what Google maintains.
I am no blind Google lover, but when it comes to practicality of effective and useful products, you have to have the best, in order to make your case.
It works well with "orange" as in the example, but searching for "Apple" directly shows results for the company without displaying the "Meanings" panel. We can't see the fruit's search results using that term, which is quite disappointing.
It gets more puzzling when you search for "Apples" and are shown the Meanings tab.
try: https://duckduckgo.com/?q=orange vs https://duckduckgo.com/?q=apple
Edit: apart from that this redesign is very pleasant :)
Luckily, there is a "classic" mode. Please Gabriel, make classic mode the default mode again.
I'm loving it, excellent work!
A question: where do the DDG guys get this massive taste for the color red?
One of my favorite things about DDG is that I do not have to worry about "search bubbles." I don't have to worry that DDG is profiling me and de-prioritizing results it doesn't "think" I would want to see. I know Google thinks search bubbles are a feature but I think they're a bug. I don't want some algorithm trying to reinforce cognitive biases for me so I don't experience the shock of a dissenting opinion. I've observed a few times that DDG seems to do a better job finding really obscure things, and I've wondered if this might somehow be related to profiling algorithms or lack thereof.
I also find the level of data mining Google (and Facebook) engage in to be creepy, invasive, and to hold a high potential for abuse. I'm certainly open to alternatives whose business model does not revolve around that kind of intrusive personal profiling. I'm aware that DDG does have an ad-and-analytics business model, but they seem to be taking the high road with it.
Prediction: "privacy is dead" will in the future be regarded as an idea that greatly harmed several multi-billion-dollar companies. I think it's firmly in the realm of utter crackpot nonsense, and anyone who thinks this is either hopelessly naive or delusional about the political, social, and economic realities of the world. A full-blown user revolt is underway.
Hopefully the market share will be more evenly distributed among search engines. Let's do our part.