But the significance of this breach is not the only thing that caught my eye.
These litigants have been entrenched in scorched-earth litigation for years now in which the working M.O. for both sides is to concede nothing and make everything the subject of endless dispute. Big firm litigators will often do this. It is a great way to rack up bills. Clients in these contexts do not oppose it and very often demand it. And so a lot of wasteful lawyering happens just because everyone understands that this is an all-out war.
To me, then, it seems that the big problem here (in addition to the improper disclosures of highly important confidential information in a public court hearing) was the resistance by the lawyers who did this to simply acknowledging that a big problem existed that required them to stipulate to getting the transcript sealed immediately. Had they done so, it seems the information would never have made the headlines. Instead (and I am sure because it had become the pattern in the case), they could not reach this simple agreement with the other lawyers to deal with the problem but had to find grounds to resist and fight over it.
I know that we as outside observers have limited information upon which to make an assessment here and so the only thing we can truly say from our perspective is "who knows". Yet, if the surface facts reflect the reality, then it is scarcely believable that the lawyers could have so lost perspective as to take this issue to the mat, resulting in such damage to a party. Assuming the facts are as they appear on the surface, this would be very serious misconduct and I can see why Judge Alsup is really mad that it happened.
A better title might be:
"Google is trying to get Oracle in trouble for revealing confidential figures"
And if a lawyer did break the law by doing it, I say she belongs on the same high pedestal people put Snowden on.
dr_dank summed it up best back in 2003:
BeOS was demonstrated to me during my senior year of college. The guy giving the talk played upwards of two dozen mp3s, a dozen or so movie trailers, the GL teapot thing, etc. simultaneously. None of the apps skipped a beat. Then, he pulled out the showstopper.
He yanked the plug on the box.
Within 20 seconds or so of restarting, the machine was chugging away with all of its media files in the place they were when they were halted, as if nothing had happened.
Huge Hacker Comments
about Haiku OS today
But so few

Haiku's BE/OS clone
quiet power, elegance
just simplicity

Plan 9 and Minix
all have their followers
will Haiku get love

More than a small toy
Time will be the decider
Windows look out now
So I load 'er up on the ol' boot partition and...what? Music production? Just a lightweight, novel web surfing machine? Someone, anyone, give me a reason to spend my weekend farting around with OS installs.
Counter to that, should the answer be, "meh, it's just something novel to play with", then why are the devs pouring time into it? I guess I'm trying to politely say I kinda don't get the point. (But maybe a good answer to question #1 can help.)
Be (the company behind BeOS) was another company Apple considered purchasing, but they ended up buying NeXT instead. IIRC they went with NeXT because BeOS didn't have networking back then.
Linux-based distributions stack up software -- the Linux kernel, the X Window System, and various DEs with disparate toolkits such as GTK+ and Qt -- that do not necessarily share the same guidelines and/or goals. This lack of consistency and overall vision manifests itself in increased complexity, insufficient integration, and inefficient solutions, making the use of your computer more complicated than it should actually be.
Instead, Haiku has a single focus on personal computing and is driven by a unified vision for the whole OS. That, we believe, enables Haiku to provide a leaner, cleaner and more efficient system capable of providing a better user experience that is simple and uniform throughout.
EDIT: I jumped the gun on this one. Should have done a bit of googling first.
There is a Linux utility that takes care of all browsers' abuse of your SSD, called profile sync daemon (PSD). It's available in the Debian repo, from a PPA for Ubuntu, or from source (links below). It uses the `overlay` filesystem to direct all writes to RAM and only syncs the deltas back to disk every n minutes using rsync. Been using this for years. You can also manually alleviate some of this by setting up a tmpfs and symlinking .cache to it.
 https://launchpad.net/~graysky/+archive/ubuntu/utils https://github.com/graysky2/profile-sync-daemon
EDIT: Add link, grammar
EDIT2: Add link to source
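For the manual tmpfs approach mentioned above, here's a minimal sketch (the paths and naming are my own, not from PSD; it assumes /dev/shm is RAM-backed, as it is on most Linux systems):

```python
import getpass
from pathlib import Path

def redirect_cache(home: Path, ram_root: Path = Path("/dev/shm")) -> Path:
    """Point <home>/.cache at a directory on a RAM-backed filesystem,
    so browser cache writes never touch the SSD."""
    ram_cache = ram_root / f"{getpass.getuser()}-cache"
    ram_cache.mkdir(exist_ok=True)
    cache = home / ".cache"
    if cache.is_symlink():
        return cache  # already redirected
    if cache.exists():
        # keep the old on-disk cache around instead of deleting it
        cache.rename(home / ".cache.bak")
    cache.symlink_to(ram_cache)
    return cache
```

Unlike PSD this sketch doesn't sync anything back to disk, so the cache is simply lost on reboot; for a browser cache that's usually acceptable.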
I hope we can get around to doing it someday. Of course, as usual in an open-source project, contributors welcome :)
Disclaimer: I dual boot (camp) windows 7 on my mac.
I feel it's a little antisocial for regular desktop apps to assume it's OK for them to do this.
Chrome is also a culprit; similar syncing caused us problems at my employer's, inflating pressure on an NFS server where /home directories are network mounts, even after we had already moved the cache to a local disk.
At the bottom of these sorts of cases I have on more than one occasion found an SQLite database. I can see its benefit as a file format, but I don't think we need full database-like synchronisation on things like cookie updates; I would prefer to lose a few seconds (or minutes) of cookie updates on power loss than over-inflate the I/O requirements.
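This is not Firefox's actual code, but the trade-off argued for above can be sketched with Python's sqlite3: relaxing SQLite's durability settings trades a few seconds of updates on power loss for far fewer forced disk flushes.

```python
import sqlite3

def open_cookie_db(path: str) -> sqlite3.Connection:
    """Open a cookie store with relaxed durability (illustrative settings)."""
    conn = sqlite3.connect(path)
    # WAL turns random page rewrites into sequential log appends...
    conn.execute("PRAGMA journal_mode = WAL")
    # ...and NORMAL fsyncs far less often than the default FULL. On power
    # loss you can lose the most recent updates, which is exactly the
    # trade being argued for with cookie data.
    conn.execute("PRAGMA synchronous = NORMAL")
    return conn
```

The settings here are illustrative, not what any browser actually ships.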
It also works out to about single-CD speed (yes, you could record uncompressed stereo CD-quality audio nearly all day long for that amount of data).
All to give you back your session if your web browser crashes or is crashed.
Moore's law at its best.
I still think the worry about it wearing out an SSD is overblown. The 20GB per day of writes is extremely conservative and mostly there to avoid more pathological use cases, like taking a consumer SSD, using it as the drive for some write-heavy database load with 10x+ write amplification, and then, when it wears out, demanding a new one under warranty.
Backing up the session is still sequential writes so write amplification is minimal. After discovering the issue I did nothing and just left Firefox there wearing on my SSD. I'll still die of old age before Firefox can wear it out.
As I understand it, this feature exists so that if the browser crashes it can restore your windows and tabs. I don't remember having a browser crash on me since the demise of Flash.
Because FF may die but the OS will save it later. That's fine
Not every write to a file means a write to disk
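A small illustration of that point in Python, on any POSIX system: each step moves the data one layer closer to the physical device, and only the final fsync actually forces it onto disk.

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

with open(path, "w") as f:
    f.write("session state")  # lands in the process's own userspace buffer
    f.flush()                 # now in the kernel page cache, still not on disk
    os.fsync(f.fileno())      # only this forces the data onto the device
```

Without that fsync the OS is free to batch the write and flush it whenever it likes, which is why an application crash alone usually loses nothing.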
Just another firefox ssd optimization.
Edit: And see bernaerts.dyndns.org/linux/74-ubuntu/212-ubuntu-firefox-tweaks-ssd
It talks about sessionstore.
I actually use a tmpfs for a few things:
    $ grep tmpfs /etc/fstab
    tmpfs  /tmp                tmpfs  nodev,nosuid,mode=1777,noatime                    0 0
    tmpfs  /var/tmp/portage    tmpfs  noatime                                           0 0
    tmpfs  /home/zx2c4/.cache  tmpfs  noatime,nosuid,nodev,uid=1000,gid=1000,mode=0755  0 0
This is a far superior solution to fiddling with configuration options in each individual product to avoid wearing down your SSD with constant writes. Murphy's law has it that such hacks will only be frustrated by the next version upgrade.
And no, using Chrome does not help. All browsers that use disk caching or keep complex state on disk are fundamentally heavy on writes to an SSD. The amount of traffic itself is not even a particularly good measure of SSD wear, since writing a single kilobyte of data on an SSD cannot be achieved at the hardware level without rewriting a whole erase block, which is generally several megabytes in size. So changing a single byte in a file can be no less taxing than a huge 4 MB write.
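A back-of-envelope model of that claim (the block size here is illustrative and real controllers buffer, coalesce, and remap writes, so this is the worst case, not typical behavior):

```python
ERASE_BLOCK = 4 * 1024 * 1024  # 4 MiB; real sizes vary by drive

def blocks_rewritten(nbytes: int) -> int:
    """Worst-case erase blocks touched by an in-place write of nbytes."""
    # Even a one-byte change forces at least one whole-block erase/rewrite.
    return max(1, (nbytes + ERASE_BLOCK - 1) // ERASE_BLOCK)
```

Under this model a 1-byte update and a full 4 MiB write both cost one block rewrite, which is the sense in which small scattered writes are disproportionately expensive.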
Once I noticed that excessive writes were occurring, it was easy for me to identify FF as the culprit in Process Hacker but it took much longer to figure out why FF was doing it.
If it's genuinely receiving new data at this rate, that's kind of concerning for those of us on capped/metered mobile connections. The original article mentions that cookies accounted for the bulk of the writes, which is distressing.
If it's not, using incremental deltas is surely a no-brainer here?
Basically chattr +i on a whole bunch of its files and databases, and everything's fine again...
I just built a new PC with SSDs, and switched back to Firefox. Even with 16GB of RAM on an i3-2120, Firefox still hiccups and lags when I open new tabs or try to scroll.
This new issue of it prematurely wearing out my SSDs will just push me to Chrome. Hopefully it doesn't have the same issues.
Anyone got ideas on that?
Maybe moving this folder to a HDD should suffice.
Firefox really started to annoy me with its constant and needless updates a few months back; the tipping point was breaking almost all legacy extensions (in 46, I believe). This totally broke the Zend Debugger extension; the only way forward would be to totally change my development environment. I'm 38 now, and apparently well beyond the days when the "new and shiny" holds value. These days I just want stability and reliability.
Firefox keeps charging forward and, as far as I can tell, has brought nothing to the table except new security issues and breaking that which once worked.
I haven't updated since 41 and you know what, it's nearly perfect. It's fast, does what I need it to do, and just plain old works.
Firefox appears to have become a perfect example of development for its own sake.
Also, the tool aside, this blog post should be held up as the gold standard of what gets posted to hacker news: detailed, technical, interesting.
Thanks for your hard work! Looking forward to taking this for a spin.
I don't think this is correct. glibc has architecture specific hand rolled (or unrolled if you will lol) assembly for x64 memchr. See here: https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/x86...
I think a lot of the residual love for mmap is because it actually did give decent results back when single core machines were the norm. However, once your program becomes multithreaded it imposes a lot of hidden synchronization costs, especially on munmap().
The fastest option might well be to use mmap sometimes but have a collection of single-thread processes instead of a single multi-threaded one so that their VM maps aren't shared. However, this significantly complicates the work-sharing and output-merging stages. If you want to keep all the benefits you'd need a shared-memory area and do manual allocation inside it for all common data which would be a lot of work.
It might also be that mmap is a loss these days even for single-threaded... I don't know.
Side note: when I last looked at this problem (on Solaris, 20ish years ago) one trick I used when mmap'ing was to skip the "madvise(MADV_SEQUENTIAL)" if the file size was below some threshold. If the file was small enough to be completely be prefetched from disk it had no effect and was just a wasted syscall. On larger files it seemed to help, though.
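A sketch of that trick using Python's mmap module (the original was C on Solaris; the threshold here is a placeholder since the real cutoff isn't given, and `madvise` support requires Python 3.8+ on a POSIX system):

```python
import mmap
import os

PREFETCH_THRESHOLD = 1 << 20  # 1 MiB; placeholder, not the original value

def map_for_scan(path: str) -> mmap.mmap:
    """mmap a file read-only, hinting sequential access only for large files."""
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size
        m = mmap.mmap(fd, size, access=mmap.ACCESS_READ)
    finally:
        os.close(fd)  # the mapping survives closing the descriptor
    # Skip the madvise syscall for files small enough to be prefetched
    # wholesale anyway; on those it has no effect and is just wasted work.
    if size >= PREFETCH_THRESHOLD and hasattr(m, "madvise"):
        m.madvise(mmap.MADV_SEQUENTIAL)
    return m
```

Whether the threshold pays off today would need re-measuring; syscall overhead and readahead behavior have both changed a lot in twenty years.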
Some discussion over on /r/rust: https://www.reddit.com/r/rust/comments/544hnk/ripgrep_is_fas...
EDIT: The machine I'm on is much less beefy than the benchmark machines, which means that the speed difference is quite noticeable for me.
RUSTFLAGS="-C target-cpu=native" rustup run nightly cargo build --target x86_64-unknown-linux-musl --release --features simd-accel
Just out of curiosity, what kind of use case makes grep and prospective replacements scream? The most "hardcore" I got with grep was digging through a few gigabytes of ShamePoint logs looking for those correlation IDs, and that apparently was completely I/O-bound, the CPUs on that machine stayed nearly idle.
I find this simple wrapper around grep(1) very fast and useful:
I'm convinced to give it a try.
Is it enabled when you specify a directory (rg somestring .) ?
Oh well. Waste of time then.
I think it's kind of odd to draw such a strong comparison to the Bitcoin blockchain. As the technical description points out, the "chainpad" system discards most of the features and properties that make Bitcoin secure against malicious participants. That seems like a totally reasonable design decision for this application, but then describing it as a blockchain just adds confusion.
In fact, the design seems to bear a much closer resemblance to the Bayou optimistic concurrency algorithm, with operational transformation as the underlying data model, and some extra crypto on top.
Zero knowledge has a very specific meaning inside cryptography. Encrypting something does not make it "zero knowledge".
DISCLAIMER: I've spent way too much time on synchronized CodeMirror editing...
Proof of work is probably an acceptable solution for a proof of concept, but anonymous consensus isn't needed for collaborative document editing.
I'm still thinking about whether this use case needs timestamping or atomic broadcast. If timestamping is sufficient, Google's new Roughtime protocol would do the job well. Otherwise you need a proper atomic broadcast algorithm like Raft, Tendermint, HoneyBadger, etc.
How can I safely share the URL to someone without already using an established encrypted communication method?
Is the encryption key stored in my browser history?
Hong Kongers are sensitive about encroachment by mainland law enforcement. Last year, several Hong Kong booksellers disappeared after publishing thinly sourced, salacious tell-alls about China's leaders. They turned up later in detention in mainland China.
That doesn't of course, mean you couldn't make money by investing in Twitter. You can make money by investing in overvalued companies as long as you don't hold onto your share until it busts. One profitable route would be if Twitter does get bought by a larger company. The market as a whole will lose on Twitter, but local maxima can be more profitable than the whole.
But at a personal level, don't be naive about this. A lot of people are investing, not just money, but time and energy, in Twitter or startups like Twitter. If you find yourself thinking that Twitter is a company with any real value, you should take a step back and evaluate whether you're being wise, or whether you've fallen prey to the unbridled optimism of the tech bubble. Twitter's position as poster child for the tech bubble makes it a good litmus test for people's understanding of the industry, and I suspect it will correlate very strongly with who loses everything when the tech bubble collapses.
- Google learnt from its mistakes with Google+ and is eager to not repeat them
- The company is a very different one now from years ago
- Google doesn't want to mess up identity again, so that wouldn't be an issue
- Google mostly just wants a social graph
- Twitter is a bad public company that makes irrational decisions
- Merging Google engineering/leadership with Twitter might actually give direction and ease the financial pressure that seems to drive the company's poor engineering decisions
For example, let us just say, hypothetically, something really damaging comes out about FB (e.g. the news about the fake video view metrics) and advertisers start fleeing from it. Wouldn't Twitter be the beneficiary of at least some of that exodus? Do they really have no option of an end game?
A considerable buy in? A full acquisition?
And, assuming a full acquisition... what would be the gain?
Google has a bad history with attempts at social media, apart from YouTube. (It bought Orkut and killed it; it tried Google Plus, which went nowhere.) Twitter is hard to make profitable without alienating the users with too many ads.
For Google, it would probably be an acquisition like YouTube. With the knowledge that it might never be profitable, but intended to get control over a significant asset. But sharing Google infrastructure and resources could probably bring down operating costs in the medium term.
I use twitter every day as my primary method of content discovery.
So at their core the BUSINESS should revolve around monetizing my eyeballs, eg advertising.
So to me it's Facebook or Google that should grab it, with FB in the lead considering their relatively smooth, unhurried, and successful takeovers of WhatsApp and Instagram.
The social graph is nice, but between Chrome and Gmail, Google already knows quite a bit about everyone.
"Twitter will be sold in six months - Kara Swisher"
Other Oracle acquisitions: Datastax, push.io, Collective Intellect, etc.
AMT has since emerged to devour the value of this benefit. By having to include the value of the spread (difference between exercise price and fair market value of the stock on date of exercise) as AMT income and pay tax on it at 28%-type rates, an employee can incur great tax risk in exercising options - especially for a venture that is in advanced rounds of funding but for which there is still no public market for trading of the shares. Even secondary markets for closely held stock are heavily constrained, given the restrictions on transfer routinely written into stock option documentation these days.
So why not just pass a law saying that the value of the spread is exempt from AMT? Of course, that would do exactly what is needed.
The problem is that AMT, which began in the late 60s as a "millionaire's tax", has since grown to be an integral part of how the federal government finances its affairs and is thus, in its perverse sort of way, a sacred cow that can't be touched without seriously disturbing the current political balance.
And so we get this half-measure that helps a bit: not by eliminating the tax risk but only by deferring it, and then only for some, not all, potentially affected employees.
So, if you incur a several hundred thousand dollar tax hit because you choose to exercise your options under this measure, and then your venture goes bust for some reason, it appears you still will have to pay the tax down the road - thus, tax disasters are still possible with this measure. Of course, in optimum cases (and likely even in most cases), employees can benefit from this measure because they don't have to pay tax up front but only after enough time lapses by which they can realize the economic value of the stock.
This "tax breather" is a positive step and will make this helpful for a great many people. Not a complete answer but perhaps the best the politicians can do in today's political climate. It would be good if it passes.
Edit: text of the bill is here: https://www.congress.gov/bill/114th-congress/house-bill/5719... (Note: it is a deferral only - if the value evaporates, you still owe the tax).
I understand the desire to avoid a regressive taxation system, but why is it that every tax rule we create comes with 2x the amount of caveats and rules? Our tax system is becoming a mess.
At this rate, soon nobody will be able to file their own taxes without an accountant to sort through the muck. And complicated systems tend to benefit the wealthy.
"the Administration strongly opposes H.R. 5719 because it would increase the Federal deficit by $1 billion over the next ten years." 
So a really bad tax rule is in place, but since it happens to bring in ~$100M/yr, we shouldn't fix the rule?
It's quite common to owe taxes today for gains on the value of your stock -- which is an illiquid asset you can't sell. This puts employees in the position of shelling out cash to keep something that rightfully belongs to them, or simply abandoning it (failing to exercise) when they leave the company. This bill would defer taxes on gains up to 7 years, or until the company goes public.
If you are awarded stock options and you exercise them, you have to file an 83(b) election within 30 days or else you are liable on all paper gains in the value of your stock.
Even if you file an 83b election, you are still liable for paper gains between the value of your options when you were granted them and the value when you exercised.
For example, if you were awarded options with a strike price of $5 and the company raised a new round of funding and the 409A valuation (& strike price of the new options) has risen to $15 per share, the IRS considers that you now owe taxes on $10 of income / share. In other words, it costs you not $5 / share to exercise but ~$8.50 including taxes.
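The arithmetic in that example can be sketched as follows (the ~35% ordinary-income rate is inferred from the ~$8.50 figure in the comment; the actual treatment depends on ISO vs. NSO status and AMT, so treat this as illustrative only):

```python
def exercise_cost_per_share(strike: float, fmv_409a: float,
                            ordinary_rate: float = 0.35) -> float:
    """Cash needed per share to exercise: the strike itself plus tax on
    the paper spread, which is taxed even though nothing was sold."""
    spread = max(0.0, fmv_409a - strike)
    return strike + spread * ordinary_rate
```

With a $5 strike and a $15 409A valuation, the out-of-pocket cost comes to $5 + $10 x 0.35 = $8.50 per share, matching the figure above.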
So the tricky part about options is that they require money to exercise, money that you often don't have ready, in order to obtain an asset that is (a) not liquid, (b) may decline in value, and (c) often can't be sold due to transfer restrictions.
For example: one early engineer at Zenefits had to pay $100,000 in taxes for exercising his stock....and then all the crap hit the fan, and he likely paid more in taxes than his shares will end up being worth. Ouch.
As a result of this problem with options, many startups -- especially later-stage ones like Uber -- choose instead to offer RSUs, which are basically stock grants as opposed to stock options. You don't have to pay any money to "get" them like you do for options.
However, the IRS considers stock grants, unlike options, immediately taxable income. If you get 10,000 RSUs per year, and the stock is valued at $5/share by an auditor, you now have to pay taxes on $50,000 of additional income, for an asset that you likely have no way of selling.
Some startups allow "net" grants -- which basically means they keep ~35% of your stock in lieu of taxes. That solves the liquidity problem, but offering this is completely at the discretion of the startup and some don't, which leaves employees at the mercy of the IRS, again having to pay cash on paper gains of an illiquid asset.
That's the core issue: the IRS is taxing individuals on truly illiquid assets.
But if you never exercise the options, then you never owe any tax. What am I missing here?
--Of course I oversimplify the consumption tax, and safeguards would need to be in place to ensure it is not regressive with respect to necessities...
"Phantom stock can, but usually does not, pay dividends. When the grant is initially made or the phantom shares vest, there is no tax impact. When the payout is made, however, it is taxed as ordinary income to the grantee and is deductible to the employer."
>> the date that is 7 years after the first date the rights of the employee in such stock are transferable or are not subject to a substantial risk of forfeiture, whichever occurs earlier
Which implies that transfer-restricted stock grants do not start this clock ticking.
Is this why I keep seeing nominal $1 salaries?
While this amendment is short in length, it seems to add additional complexity to an already complex tax code. I would have liked to have seen an even simpler proposal.
Does this mean I don't owe AMT addition next year?
What does Palantir do? "integrate[s] disparate data sets and conduct[s] rich, multifaceted analysis across the entire range of data."
How does NYC use it? Tax fraud, fire code violations, fake security guards, fake IDs, fake cigarettes, fake marijuana.
So the data already existed in NYC databases and the crimes they're enforcing already existed.
And yet: "the potential for that kind of outright abuse is less disturbing than the ways in which Palantir's tech is already being used. The city's embrace of Palantir, outside of law enforcement, has quietly ushered in an era of civil surveillance so ubiquitous as to be invisible." -- total hyperbole!
If anything, the most telling part of this article to me was the small sums of money being made by Palantir, which is frequently lauded as one of the most elite, selective startups for software engineering positions. It seems to operate in small change relative to all the hype.
Take a look at the top 10 US government contractors. Most of the top 10 make weapons systems. But two are in information processing: Leidos (used to be SAIC), and L-3 Communications. Palantir isn't even in the top 100. Maybe they're more into state and local customers.
There's lots of potential for innovation in the state and local government space. A smartphone app for building inspectors, for example. One that involves lots of picture taking and GPS tagging. There are building inspector apps, but they're basically paper forms reworked for tablets.
An ambitious project would be a system which takes the video and audio from a cop's body cam and does most of the paperwork. Show it a driver's license or a face, and it's in the record and understood by the system. Cops hate paperwork, yet have to document much of what they do. Automate that and cops will be glad to wear a cam. Difficult and controversial, but useful.
It might be easier to sell in countries where local government is more standardized. In the US, you'd have to customize a system for every police department.
It is a continuous marvel that Peter Thiel, nominally an outspoken and prominent libertarian, is partially responsible for one of the most insidious powers that the U.S. government has over its people.
Committing resources to quality of life improvements? Good
75% of enforcement done in neighborhoods of "color"? Yikes
CIA-backed data analysis firm Palantir Technologies? Dear god
Why anyone thought it was a great idea to name their company after the remote sensing device guaranteed to lie to you and make humans suicidally depressed has always been beyond me.
Presumably this technology is supposed to be helping the people of NYC. Shouldn't these people know what data is being collected about them so they can decide whether or not they actually want it?
Perhaps one of the hardest lessons to live by, but of immense value.
Thanks a lot for taking the time to write this down!
Vulcan (from ULA), New Glenn (from Blue Origin), and Falcon Heavy (from SpaceX) are all better platforms for space exploration, which could enable science (such as the Europa mission) and travel (to Mars), and cost far less than SLS (in development and $/kg to orbit). NASA should be spending money on missions, not rocket development.
> Expand the full use and life of the space station through 2024 while laying the foundation for use through 2028.
So does that mean ISS is not going to be abandoned by 2024?
The only way the US can get anything done in space exploration is if it can be fully funded and completed in less than 2 years. So a little probe here and there is completely doable, but a Space Shuttle replacement or manned Mars mission or any other big project is a complete no-go. It won't ever happen.
    brew update
    brew upgrade
    npm -g upgrade
    for f in ~/projects/*; do
      cd $f
      npm update --save
      npm update --save-dev
      mix deps.update --all
      elm-packages update
      bundle update
      npm test
      mix test
      rake test
    done
On Linux I used the 'kerl' script to easily switch between installations: https://github.com/kerl/kerl
Seems like it works on OS X too: http://stratus3d.com/blog/2014/10/24/install-erlang-16-on-ma...
Edit: I mostly work with Elixir and have Erlang installed via Homebrew right now.
mutt -f imaps://imap.gmail.com
The one caveat that I should point out (because it's not mentioned in the article) is that you will probably never be fully rid of official Gmail clients. There is still no good mechanism to use some features with Mutt like thread muting, and these are essential to effective email these days. It's also often more convenient to read certain types of email (e.g. messages that are heavy in multimedia) from a client that supports graphics.
My usual habit is to read email in the web client or on mobile, and respond to or compose mail from within Mutt.
- regex search
- faster actions (like batch delete, mark as read) using tag
- can use my editor to compose. I use emacsclient -nw and it's so easy to copy things from shared buffer.
- very easy to customize, for example, I want to see the timestamp as local time regardless of the sender's timezone, I wrote a smile Go program to do that https://github.com/wujiang/localize_mutt; I also run a cronjob to archive old emails.
It works great. Very fast, and it's nice to have a local backup of my email.
How long it will last, who knows; sooner or later cross-border sales of tobacco will be banned, but for the time being those who can are stocking up while they can. Cigars last for decades and peak in the 5-25 year range depending on the cigar, so right now there's no downside to buying as many as you can; even the non-special editions (Cuban) are probably good for about 10% a year in appreciation.
Personally I'm buying a box a month.
Anyway TV Shows episode calendar is really useful and I think it's a nice idea to have this information arranged in this way.
I would suggest looking for less computing intensive way to make the backgrounds work like they do now.
Either way, it's not a huge deal since I seldom watch the site for longer than those few seconds, but I have clicked the alternative links on Google a few times just to see if they would function any better.
Good job anyway! Looks really good.
Or find a niche like mountain bike, and list all the mountainbiking events
days.to/until/summer showed the days until summer ended. It was great!
Simple, to the point, beautiful.
* Is it necessary to go to a url when one clicks on an event? The amount of information displayed is tiny (date, event title, time until event) and it's a waste of screen space and time (now I have to go back to look at other things). It would be cool to show the information in a modal box and then continue to browse what's coming up.
* Okay, now I'm on days.to and I know that an event will take place in two months. Then what? I leave the site and in a few days, I forget about it. I think I stumbled upon an email feature but I couldn't find it again. It also has a calendar. Why? Wouldn't it be better to build on something a large number of people are already using and trusting to manage their daily lives? Something like Google Calendar or Facebook Events. Maybe using their API to insert an event into the already existing calendar. Even if I leave days.to, I can still see the positive it brought to my life and I'm more likely to come back.
* Maybe topics. I push in some interests. With enough users, it might start detecting certain patterns and starts showing me upcoming events resulting from the interests of people who share some of my interests. If I like music and painting, and you like music and theatre, it might show me theatre events and show you painting events.
iTunes is done by a different team than the OS. At one point, at least, much of the iTunes web side was handled by remote contractors; not sure about the app itself. Given that Apple is releasing 4 new OSes every year, it's not surprising something gets screwed up.
It will be fixed within a week I bet.
Is this Apple's Bitlocker Elephant Diffuser?
Can I have some extrabacon with that?!
1. wipe his previous ios10 backups
2. if the backup password is not significantly long, increase the length of his backup password with some random enough material.
And, of course, never forget the "$5 wrench" comic.
I still hope Apple will publicly respond on this. It simply doesn't fit with the other steps they did at least starting with iPhone 5s.
Perhaps not a full back door, but more of an open upstairs window?
Given the other crap, relatively speaking they still shine though - at least a patch will be out soon to everyone that turns on their device.
The FCC should really look into making security updates for mobile devices mandatory within a time limit, in the absence of which the OEM or the carrier must replace the device free of charge with one that doesn't have the vulnerability. It's criminal what OEMs and carriers are getting away with while making a ton of profit.
I'm guessing the reason this article was posted, and the feature was added (in 2013, mind you), was the malicious way DNS servers have been abused in the last decade, and the recent mentions by Bruce Schneier of the attacks on global DNS infrastructure (perhaps they leverage abusing recursive queries or something? I don't know). It's sort of like BCP38: good net citizens should be doing this, not for their own network's protection, but for everyone else's.
Ill-defined research problems, vague statements, poor methodology, many grammatical mistakes... given the nature of peer review, it's only natural that people who author nonsensical papers would nod at nonsensical reviews.
For people saying that this is because academia is an old boys' network: not quite so. While it can definitely be like that when you get to the top, the vast majority of peer reviewers for most conferences are just grad students, postdocs, or junior researchers who don't really discriminate by trying to guess who wrote the paper.
Nevertheless, while bad reviews do make it through, I do think the editors are able to recognise them for what they are.
"Our work has shown that this model of speciation does hold. But in addition, we have shown there are other routes to speciation, such as gene flow from one species to another. We see this in the Big Bird lineage but also in cichlid fishes and butterflies. There are multiple routes to speciation."
It's one thing to have a hypothesis, another to spend the decades it takes to modify it with observations.
-- How do you rank yourself among writers (living) and of the immediate past?
-- I often think there should exist a special typographical sign for a smile -- some sort of concave mark, a supine round bracket, which I would now like to trace in reply to your question.
I think the Internet fundamentally changed when that happened.
Tangentially-related, I can't fathom why someone would post YouTube videos of `telnet towel.blinkenlights.nl`.
Here's the verse:
Tumble me down, and I will sit
Upon my ruines (smiling yet :)
I think that the article does a fairly convincing job of showing that this is just weird 17th century typography, but then again, there was enough experimentation with printing at the time that it also wouldn't surprise me if it was intentional, at least at some point in the typesetting process.
Me: What's with all the :-) in the posts?
Friend: It indicates joking.
Friend: What's it look like?
Me: A pinball plunger.
Friend: Rotate 90 degrees.
It is funny to imagine how emoticons (https://en.wikipedia.org/wiki/List_of_emoticons) would look today if one of the alternative symbols had been accepted.
For years I have been searching for a copy of Blue Board (https://en.wikipedia.org/wiki/Blue_Board_(software)), a popular BBS program in the Vancouver, BC, Canada area written by the late Martin Sikes http://www.penmachine.com/martinsikes/
I even talked with the owner of Sota Software, the publisher, but I never heard anything back.
If anyone has a copy, PLEASE let me know! I've been wanting to set up a memorial telnet Blue Board site for decades now.
"Since Scott's original proposal, many further symbols have been proposed here:
(:-) for messages dealing with bicycle helmets
@=   for messages dealing with nuclear war"
I'm glad we are past that.
Nowadays, if a thread came about to propose the ':-)', people would devolve into a debate about the proper use of the parenthesis, and at least one user would claim that '(-:' was a better choice, though it is the dark-horse option for the community.
Well, I learned something today.
Has anyone thought about creating a separate HN for jokes?
Pasted-in example stolen from Glitchr, mainly to see how well HN renders them:
(@> <@)
( _) (_ )
 /\   /\
I see you
I wonder what a 300 meter dive "costs" in terms of energy for such a massive animal, and how amazing it is that they can manage 12 minutes on one breath.
Seems like HN has a very warped view: the view that "there's no money in research" means some poor kid is better off flipping burgers into her 50s than becoming a lab manager.
No! It is a social and gender issue! Let's push even more students into this career path.
People with little family money get free grants and low-interest government-backed loans to attend college. She is attending a public university. In NYC, there are many students attending the City University of New York, and I have met some. They are studying biology, engineering, and other sciences. They generally, but not always, live at home.
The annual tuition and fees are less than $7000 and for transportation, the MTA subway/bus is $115 per month, unlimited rides.
Students like those mentioned in the article get Pell Grants and Stafford government-backed low-interest loans. Pell Grants are $5,800 per year. Stafford Loans are $5,500 the first year, $6,500 the second year, and $7,500 for the remaining years.
In my particular case, I paid for 90% of tuition/housing/living expenses by programming computers beginning in high school. I was not eligible for Pell Grants nor any form of loans including Stafford Loans.
So, I really don't understand these arguments. Public universities provide a first class low-cost undergraduate education and of course have PhD programs and so on.
Once one has an undergraduate degree with good grades in the sciences and engineering, if they are admitted to a PhD program, they are fully funded for both tuition and housing.
CUNY Tuition and Fees: http://www2.cuny.edu/financial-aid/tuition-and-college-costs...
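The arithmetic in the comment, roughly, using the first-year figures quoted above (illustrative only; the tuition number is the commenter's "less than $7000" ceiling):

```python
# First-year aid vs. cost at CUNY, per the figures in the comment.
pell_grant   = 5800        # Pell Grant, per year
stafford_yr1 = 5500        # Stafford loan limit, first year
tuition_fees = 7000        # "less than $7000" annual tuition and fees
metrocard    = 115 * 12    # unlimited MTA subway/bus pass, monthly * 12

aid  = pell_grant + stafford_yr1
cost = tuition_fees + metrocard
print(aid, cost, aid - cost)  # → 11300 8380 2920
```

On these numbers, grants plus first-year loans cover tuition and transit with a few thousand dollars to spare, which is the point the commenter is making about living at home.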
My own kids and their friends often express that science is interesting but they won't get into it because of that.
A big part of science are 'ideas', and ideas are interesting things in Human culture. To be the 'idea person' in a social group requires considerable social status. I see so many people in corporations battling to have their ideas win. I see so many people of higher status claiming ownership of the ideas of those 'beneath' them. I see plenty of great ideas being ignored because of who proposes them. And, it's very rare to see an outsider's idea gain influence.
They say that execution matters much more than ideas - but they go hand-in-hand. The person who gets to execute also gets to choose the idea.
Given the comparative physical weakness of the Human, 'the idea' is their number one weapon and asset. It enables power. So, there are probably a lot of social reasons why most lower-status (lower income) people are kept out of science and research. It's probably more of a systematic result of Human behavior than just being poor.
This is an amazing breakthrough, though. Stabilizing SOD1 could potentially pave the way for preventing ALS in its early stages. Would this reversal of protein clumping help patients who have been exposed to pesticides or had head injuries that led to ALS?
Also, please donate to ALS research if you can.
Sexual intercourse began
In nineteen sixty-three
(which was rather late for me) -
Between the end of the Chatterley ban
And the Beatles' first LP.
It doesn't answer the fun part, though, because Larkin was known to have a tedious love life.
Android searches for access points even when wifi is turned off. If anyone (with GPS enabled) uses that wifi with any Google services, the BSSID will end up in their database. Also, if the Google car has been nearby, it has recorded the presence of the wifi access point at that location.
Before you freak out: Apple and Microsoft also use access point information for positioning, although not as successfully.
This guy said he does this in a bunch of cities, driving around the geographic area of the USA where I work. Very interesting to learn about.
Edit: I am not located anywhere near where Google has an office, so for him to stop by was interesting by itself.
Edit 2: grammar.
If you want "privacy", turn off location services, or your phone; if you want privacy, don't take your phone with you.
That said, this isn't some "conspiracy": Google actually states, when you enable background location services, that this will be on all the time even when GPS and the wireless network are explicitly disabled. IIRC, even in airplane mode the background location service can be operational without violating FCC regulations.
> Something you didn't mention: when a Google-car goes around taking pictures for StreetView it also maps the location and all wifi network names. So taking a new router with a new network name from a different ISP might work, but only until they come near your house to update their pictures...
> - Bakuriu
I am not sure how this makes me feel :-|
It should be possible to reconstruct Google's BSSID database, right?
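Reconstructing it wholesale is another matter, but individual lookups are straightforward: Google exposes a Geolocation API that resolves observed BSSIDs to coordinates. A sketch that just builds the request body (the MAC addresses and signal strengths below are made up, and an actual lookup needs your own API key):

```python
import json

GEOLOCATE_URL = "https://www.googleapis.com/geolocation/v1/geolocate"

def build_payload(bssids):
    """Build the JSON body for Google's Geolocation API from a list of
    (mac_address, signal_strength_dbm) pairs observed nearby."""
    return json.dumps({
        "considerIp": False,  # locate by wifi only, not by the caller's IP
        "wifiAccessPoints": [
            {"macAddress": mac, "signalStrength": rssi}
            for mac, rssi in bssids
        ],
    })

payload = build_payload([("00:25:9c:cf:1c:ac", -43),
                         ("00:25:9c:cf:1c:ad", -55)])
print(payload)
# POSTing this to GEOLOCATE_URL + "?key=YOUR_KEY" returns a JSON object
# with "location" (lat/lng) and "accuracy" fields.
```

Crawling enough of these lookups to rebuild the whole database would run into rate limits and terms of service long before it ran into technical obstacles.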