Hacker News with inline top comments - 15 Dec 2015
Move Fast and Fix Things githubengineering.com
194 points by samlambert  2 hours ago   43 comments top 17
1
jerf 49 minutes ago 2 replies      
I'll highlight something I've learned in both succeeding and failing at this metric: When rewriting something, you should generally strive for a drop-in replacement that does the same thing, in some cases, even matching bug-for-bug, or, as in the article, taking a very close look at the new vs. the old bugs.

It's tempting to throw away the old thing and write a brand new bright shiny thing with a new API and new data models and generally NEW ALL THE THINGS!, but that is a high-risk approach that usually comes without correspondingly high payoffs. The closer you can get to a drop-in replacement, the happier you will be. You can then separate the risks of deployment vs. the new shiny features/bug fixes you want to deploy, and since risks tend to multiply rather than add, anything you can do to cut a risk into two halves is still almost always a big win even if the "total risk" is still in some sense the same.

Took me a lot of years to learn this. (Currently paying for the fact that I just sorta failed to do a correct drop-in replacement because I was drop-in replacing a system with no test coverage, official semantics, or even necessarily agreement by all consumers what it was and how it works, let alone how it should work.)

2
cantlin 25 minutes ago 0 replies      
The strategy of proxying real usage to a second code path is incredibly effective. For months before the relaunch of theguardian.com, we ran traffic to the old site against the new stack to understand how it could be expected to perform in the real world. Later of course we moved real users, as incrementally as we possibly could.

The hardest risk to mitigate is that users just won't like your new thing. But taking bugs and performance bottlenecks out of the picture ahead of time certainly ups your chances.

3
mwcampbell 53 minutes ago 2 replies      
This is tangential, but given the increasing functionality and maturity of libgit2, I wonder if it would yet be feasible to replace the Git command-line program with a new one based on libgit2, and written to be as portable as libgit2. Then there would be just one Git implementation, across the command line, GUIs, and web-based services like GitHub. Also, the new CLI could run natively on Windows, without MSYS.
4
jcchee88 3 minutes ago 0 replies      
When running with Scientist enabled, doesn't that mean you pay the runtime of both the old and the new implementation instead of just one?

I could see this being OK in most cases where speed is not a concern, but I wonder what we can do if we do care about speed.

5
_yosefk 9 minutes ago 0 replies      
TIL that github used to merge files differently than git because it used its own merge implementation based on git's code, to make it work on bare repos. Showcases a benefit of open formats and open source, showcases a downside as well (I'd never guess it might merge differently.)

It's a good thing nobody contributes to my GitHub repos, since no one has had the chance to run into the issue...

6
eric_h 30 minutes ago 0 replies      
> Finally, we removed the old implementation which frankly is the most gratifying part of this whole process.

On average, I get much more satisfaction from removing code than I do from adding new code. Admittedly, on occasion I'm very satisfied with new code, but on average, it's the removing that wins my heart.

7
smg 1 hour ago 5 replies      
I am trying to understand why the new merge method needed to be tested online via experiment. Both correctness and performance of the new merge method could have been tested offline working with snapshots (backups) of repos. Could a github engineer shed more light here?
8
daveguy 1 hour ago 0 replies      
Very cool. I like this parallel execution of the original version and the update, with comparisons between the two. They use a Ruby package developed in-house that has been made open source, Scientist. Does anyone know if there is a similar kind of package for Python (preferably 2.7) development? It seems like an interesting area in between unit tests and A/B tests.
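For what it's worth, the core of the pattern is small enough to sketch in plain Python; this is a rough illustration with made-up names rather than an existing library, and it also shows one answer to the runtime question above (only run the candidate on a sample of calls):

    import logging
    import random
    import time

    log = logging.getLogger("experiment")

    def experiment(name, use, try_, compare=None, sample=1.0):
        """Run the control (use), run the candidate (try_) on a sample of
        calls, compare and log the results, and always return the control."""
        start = time.time()
        control = use()
        control_ms = (time.time() - start) * 1000

        if random.random() < sample:  # pay the candidate's cost only on a fraction of calls
            start = time.time()
            try:
                candidate = try_()
                candidate_ms = (time.time() - start) * 1000
                match = compare(control, candidate) if compare else control == candidate
                log.info("%s: match=%s control=%.1fms candidate=%.1fms",
                         name, match, control_ms, candidate_ms)
            except Exception:
                log.exception("%s: candidate raised", name)

        return control

    # hypothetical usage:
    # merged = experiment("merge", use=old_merge, try_=new_merge, sample=0.05)
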
9
clebio 51 minutes ago 0 replies      
Seems like the biggest takeaway is "have good tooling and instrumentation". I'm working with a complicated legacy production system, trying to rebuild pieces of it, and we have little or no instrumentation. Even _introducing_ such tooling is a potentially breaking change to production systems. Ach schade.
10
nod 1 hour ago 1 reply      
This is inspiring reading. One may not actually need the ability to deploy 60 times a day in order to refactor and experiment this effectively, but it's clearly a culture that will keep velocity high for the long-term.
11
danielsamuels 23 minutes ago 1 reply      
I wish they would add the ability to fast-forward merge from pull requests. I know many large projects (including Django) accept pull requests but don't merge them on Github simply because of the mess it makes of the history.
12
netghost 16 minutes ago 0 replies      
For operations that don't have any side effects, I can definitely see how you could use the Science library.

I'm curious though if there are any strategies folks use for experiments that do have side effects like updating a database or modifying files on disk.
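One strategy that comes up for the database case is to run the candidate's writes inside a transaction that always gets rolled back, so you can observe what it would have done without keeping it. A rough sqlite-flavoured sketch (the table and functions here are made up for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE audit (msg TEXT)")
    conn.commit()

    def candidate(cur):
        # stand-in for the new code path's side effect
        cur.execute("INSERT INTO audit VALUES ('new code path ran')")
        cur.execute("SELECT COUNT(*) FROM audit")
        return cur.fetchone()[0]

    def run_candidate_dry(conn, candidate):
        """Run the candidate, observe what it would have written, then undo it."""
        try:
            return candidate(conn.cursor())
        finally:
            conn.rollback()  # never keep the candidate's side effects

    print(run_candidate_dry(conn, candidate))                        # 1 -- it "worked"
    print(conn.execute("SELECT COUNT(*) FROM audit").fetchone()[0])  # 0 -- no trace left

It obviously doesn't help once the side effects cross a network boundary; there you're back to stubbing the external call or diffing against a shadow copy.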

13
__jal 58 minutes ago 0 replies      
Nothing really to contribute or ask, other than to say that I really enjoyed the writeup. Although I have nothing coming up that would use the code, the new library sounds really neat. Kudos!
14
abritishguy 1 hour ago 2 replies      
Wow, strange that people weren't reporting these merge issues when they were clearly impacting people.
15
dlib 38 minutes ago 0 replies      
Very interesting, definitely gonna try this out as I have seen similar use-cases.

Any chance GitHub is at any time going to show the specific merge conflicts for a PR that cannot be merged?

16
blt 50 minutes ago 0 replies      
Github sounds like a great place to work.
17
logicallee 45 minutes ago 1 reply      
This is kind of tangential, but I hate the term "technical debt"; it's as financially illiterate as the word "random" applied to a deterministic process (hence the term pseudorandom). We need to come up with a better term for this, like, yesterday.

Let's do this right now.

--> What term can we use instead of "technical debt" that is financially correct and also captures the emotional and analogy part of it?

(this is at -2, 8 minutes after I submitted it - boy some people really hate my question! can these people kindly think my objection through :-D. thanks.)

Angular 2 Beta released angularjs.blogspot.com
62 points by javajoshw  50 minutes ago   6 comments top 3
1
thoughtpalette 20 minutes ago 0 replies      
Excited to finally start playing around with this.
2
haxa 11 minutes ago 0 replies      
Has anyone had experience working with NativeScript and Angular 2? How does it compare with React Native? And is there any chance this will evolve into a viable alternative for developing native apps using web technology?
3
revelation 17 minutes ago 3 replies      
That's Google for you, having to present your exciting MVW toolkit on the car crash of a website that is Blogspot.

Right now, there is a massive cookie consent form blocking my view of the actual article.

Scientists may have solved a mystery about sea-level rise washingtonpost.com
30 points by Mz  1 hour ago   8 comments top 4
1
zymhan 26 minutes ago 3 replies      
So the mystery was because the original paper had flaws in its methodology? That's anticlimactic.
2
jdalgetty 2 minutes ago 1 reply      
so basically we're all going to die.
3
hanniabu 40 minutes ago 0 replies      
The comments in the article from the climate change skeptics hurt to read.....

"Are you suggesting that the oscillating ice ages are caused by a slowing and speeding up at the earth's core?"

4
viggity 1 hour ago 0 replies      
I thought this might be about the effect of groundwater extraction causing coastal cities to sink 10+ times faster than sea level rise.

http://meetingorganizer.copernicus.org/EGU2014/EGU2014-14606...

Still interesting though.

Graph Isomorphism Algorithm Breaks 30-Year Impasse quantamagazine.org
94 points by kercker  3 hours ago   7 comments top 4
1
gre 1 hour ago 0 replies      
Graph Isomorphism in Quasipolynomial Time: http://arxiv.org/abs/1512.03547
2
nine_k 1 hour ago 1 reply      
Previously posted and discussed extensively:

https://hn.algolia.com/?query=Graph%20Isomorphism&sort=byPop...

3
Zach_the_Lizard 11 minutes ago 1 reply      
How long until we start seeing this algorithm in Google interviews?
4
jgn 1 hour ago 1 reply      
Thanks for posting this. In the future, could you please add the algorithm to the title? This feels a couple steps removed from clickbait. No offense intended; it's just a suggestion.
The Jacobs Ladder of Coding medium.com
17 points by franzb  1 hour ago   discuss
Where do all the bytes come from? medium.com
42 points by EddieRingle  3 hours ago   6 comments top 3
1
larrik 1 hour ago 1 reply      
I don't know, it's a bit like taking a screenshot of a text file, and wondering why the screenshot is 64k and the text file is only 500 bytes (or whatever).
2
joosters 1 hour ago 2 replies      
Minor nitpick, but it screws up all the calculations:

The original NES console was only designed to output images that were 256 wide by 240 high; meaning that the final image that needed to be displayed to the screen was 180kb in size.

The NES definitely didn't have 24-bit colour, so the final image data was at most 60kb, assuming 256 colours, or 30kb assuming 16 colours and a palette.

I don't know for sure what colour settings the NES had, but I doubt it had a freely selectable 256 colours for each pixel. Probably a limited palette, maybe per-sprite, maybe for the whole screen.
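For reference, the arithmetic behind those numbers (nothing NES-specific, just framebuffer math):

    WIDTH, HEIGHT = 256, 240
    pixels = WIDTH * HEIGHT  # 61,440 pixels

    for bits_per_pixel in (24, 8, 4):
        size_bytes = pixels * bits_per_pixel // 8
        print("%2d bpp: %6d bytes (~%d KB)" % (bits_per_pixel, size_bytes, size_bytes // 1024))

    # 24 bpp: 184320 bytes (~180 KB)
    #  8 bpp:  61440 bytes (~60 KB)
    #  4 bpp:  30720 bytes (~30 KB)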

3
fbbbbb 1 hour ago 0 replies      
I was mildly surprised when John Carmack retweeted the original tweet. https://twitter.com/smashingmag/status/675624576630571009

The state of the image (JPEG artifacts) was a dead giveaway that the comparison is worthless.

Portable Offline Open Street Map spatialdev.com
131 points by jharpster  6 hours ago   32 comments top 6
1
just_testing 5 hours ago 1 reply      
Let me explain what the innovation here is:

There are offline OpenStreetMap clients, but there aren't ways to update said map offline, or to create a "mini-OSM" that can later sync with the main one.

For instance, if you're doing a survey in the Amazon with a local community, you would need to do the survey, go back somewhere with internet, sync the data with OSM, download the new file, and go back to the local community.

The innovation those guys are making is to create a mini-OSM, so the village could have its own mini-OSM, and later that mini-OSM could be synced to the main one.

They are not the only ones trying to do that, an NGO called Digital Democracy is also trying (https://digital-democracy.org)

2
legulere 18 minutes ago 4 replies      
I wonder whether, as smartphone capacities grow, we would one day hit a point where most meaningful data is already preloaded on every device: a full copy of Wikidata, Wikipedia, OpenStreetMap, dictionaries.

The trend goes in the complete opposite direction: the devices get faster, but we only use that to draw the data from our servers faster. We push all our data into the cloud, although our devices share a private network most of the time.

3
brudgers 2 hours ago 3 replies      
The other night I was thinking about the potential for "mapocalypse" -- navigation in a future where paper maps are rare because of services like Google maps and where the network is unavailable indefinitely. Even dedicated navigation devices are forgoing stored maps for connected services.

Over the long term, widespread access to offline maps feels like a critical plan B. I also suspect that we're just at the beginning of a map industry not in the mature commodity phase.

YMMV.

4
rmc 5 hours ago 0 replies      
It sounds interesting. But it will be interesting to see how they approach "syncing" the two 'datasets' (their own, locally modified data, and the main OSM dataset). The current approach with the main OSM editors is to make the mapper manually figure it out.
5
replax 5 hours ago 1 reply      
One thing which would really be great is an OSM client for Kindle/ebook readers with an offline data store. With an optimised interface and map styling it could come in handy in many cases (hiking, bike tours - anything where weight, power and portability are a concern).

Obviously an editor like the one spatialdev is developing would need more advanced features, thus they are targeting Android.

6
anc84 5 hours ago 3 replies      
I don't understand what this is and where the "Portable Offline Open Street Map" part is described.

There are already tons of applications for offline OSM usage. There are even (sadly) many different vector formats and files available for download. It would be nice to focus on improving those instead of developing yet another competing standard. I want to be able to use multiple apps with the same data, not have to provide each app with its own format (looking at you, OSMAnd, Oruxmaps, maps.me...).

CSS3 proven to be Turing complete? my-codeworks.com
62 points by mmastrac  4 hours ago   47 comments top 8
1
Dylan16807 2 hours ago 4 replies      
1. Feeding infinite HTML is not the same as feeding tape. A million tape cells can keep a typical Turing machine running indefinitely. A million rows only keep this running for a million execution steps, which is what, a tenth of a second if optimized?

2. The 'crank' here is not part of CSS. Computer languages that are declared 'Turing complete' need to be able to crank themselves. You need to be able to tell them to go, and wait. I accept the Magic: The Gathering Turing machine (at least as long as you remove the word 'may'), because it's part of the MtG rule set that you continue performing all the state transitions until you reach a halt.

3. Allowing this completely external pump means that anything that can add and multiply three numbers and then exit would be counted as Turing complete, because you can then instantiate an infinite number of these and pump data through. The Turing complete nature of that construction lies mostly in that pump. It is not at all just a crank that says 'go'.

And 3 is really the important part here. None of the scary implications of 'Turing complete' come into play, because you can't take the result of one arithmetic statement and feed it into more. All of that playing around is roughly O(n) in terms of page size. Not O(unlimited) as 'Turing complete' might imply.

2
vectorjohn 1 hour ago 0 replies      
I'm a little skeptical, in that the article starts off by saying it is "more" Turing complete than C. Something either is or is not Turing complete. The nitpick that a C implementation has a pointer size which limits the memory to !infinite is an unimportant implementation detail. If that's your argument, nothing is Turing complete, because the universe is made out of a finite amount of matter and energy.

Also, as others have pointed out, it doesn't "run" unless an external thing is pressing buttons. If you allow what are essentially external programs to run, you might as well have Javascript doing the job, and then the headline becomes a lot less interesting.

3
murbard2 3 hours ago 2 replies      
Two comments:

1) it's not surprising: a lot of very simple systems can compute. It seems to naturally happen as you add flexibility to your rules. See for instance Wang Tiles or some cellular automata.

2) it's not a good thing: it means that the behavior of CSS3 is undecidable in general, which makes it much harder to build tools that can meaningfully analyze it.

4
dheera 2 hours ago 3 replies      
Great, so now I can write an x86 emulator and run Windows 95 in CSS, but I still can't figure out how to reliably vertically-center an element.
5
fixermark 1 hour ago 1 reply      
I think there's a flaw in the model.

"Assume that the amount of HTML currently loaded is finite but sufficient for all of the state to be properly rendered."

So consider:

    <html>
      <div class="myContainer">
        <div>1</div>
        <div>2</div>
        <div>3</div>
        . . .

I've never met an HTML streaming solution that won't just stream this sequentially, which means no </div> for myContainer will ever be emitted (and therefore the HTML will never be well-formed, and the CSS will never have sufficient information to lay out).

If the HTML were streamed such that <html><div class="myContainer"></div></html> were received, and then the interior of myContainer were streamed, that'd be a different story, but that doesn't exist. So I don't think this blog post's argument that CSS3 is Turing complete works for any real implementation.

6
haberman 2 hours ago 1 reply      
So CSS3 is both Turing complete and so low-level that most projects of significant size use a preprocessor like LESS, SASS, etc to write their CSS3 for them.
7
neocraftster 1 hour ago 1 reply      
Does this mean that in addition to a JavaScript blocker I need a CSS blocker to stop malicious code from being run in my browser?
8
forrestthewoods 2 hours ago 2 replies      
That's horrifying. And I'd argue a damning indicator that CSS3 is wildly overcomplicated. It really shouldn't be Turing complete.
Fighting Human Trafficking: Open Data, Big Data, and Python wiredcraft.com
53 points by ahaque  4 hours ago   4 comments top
1
stopthelies 2 hours ago 3 replies      
Fun. I'm doing my PhD thesis on human trafficking at DePaul University (in Chicago, Illinois).

http://www.villagevoice.com/news/real-men-get-their-facts-st...

I'm actually a little concerned about the concept of "human trafficking" since the stats being calculated tend to omit realities.

One big reality people ignore? People who pay and attempt to cross the border from Mexico to the US? Those are counted as people being human trafficked.

>99% of the immigrants that arrived illegally? Human trafficked, paper after paper, when you read into them. They even count cases where they came alone but had monetary help from a friend or family member.

So it's ironic. About my thesis - it points out the contradiction that people who push for the rights of illegal immigrants while proposing to fight human trafficking are really advocating the opposite position.

By extension, though not mentioned in the paper, I have wanted for the longest time to show how people abuse "human trafficking" out of sheer political opportunism.

I'll gladly post the paper on my blog when it's checked by our department.

Regression to the mean is the main reason ineffective treatments appear to work dcscience.net
41 points by Amorymeltzer  3 hours ago   7 comments top 3
1
fiatmoney 1 hour ago 1 reply      
Also known as the Drill Sergeant Paradox.

Imagine that shouting has no actual effect on performance, but it is traditional to shout at underlings when they do something particularly poorly. When your trainees screw up, you berate them - and afterwards they actually do tend to do better. Unfortunately, this is because the screwup is more often than not a random variation, and the improvement is due to regression to the mean, not the treatment. Conversely, praising them when they do well (again, assuming no underlying effect) actually seems to worsen their performance.
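A quick way to convince yourself of this is to simulate it with no treatment effect at all; a minimal sketch:

    import random

    random.seed(1)
    N = 100000

    # Two independent attempts per trainee; nothing happens in between.
    first = [random.gauss(0.0, 1.0) for _ in range(N)]
    second = [random.gauss(0.0, 1.0) for _ in range(N)]

    threshold = sorted(first)[N // 10]  # bottom 10% on the first attempt ("got shouted at")
    berated_first = [f for f in first if f <= threshold]
    berated_second = [s for f, s in zip(first, second) if f <= threshold]

    print("berated group, first attempt:  %.2f" % (sum(berated_first) / len(berated_first)))
    print("berated group, second attempt: %.2f" % (sum(berated_second) / len(berated_second)))
    # The second-attempt mean is back near zero: they "improve" with no treatment at all.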

2
bigchewy 52 minutes ago 0 replies      
This is, unfortunately, rampant in healthcare. The natural variation amongst people, as well as the natural variation of our health throughout our lives, makes actually analyzing healthcare outcomes incredibly difficult.

90%+ of published outcomes can be invalidated by simply looking at the published data. If you want some chuckles, read blog posts by Al Lewis ripping on research published by companies touting their own performance. He's acerbic and condescending but also, mostly, correct.

3
ashearer 1 hour ago 1 reply      
So the research that established the placebo effect (an effect that's well-known, and widely regarded as an illustration of the importance of control groups in research) itself had no control group? That's incredible.
Software from Disney Research Seamlessly Blends Faces from Different Takes disneyresearch.com
25 points by protomyth  2 hours ago   6 comments top 6
1
6stringmerc 4 minutes ago 0 replies      
Faces might be blending, but the audio isn't exactly something I enjoy; pretty wild that's how it sounds. Could surely pick a favorite audio track, or, a la Max Martin, comp all the takes for material, knowing the digital warp will be passable. Slick technique, I'm definitely impressed with the show of tech and capability.
2
boxy310 2 minutes ago 0 replies      
The mad + sad takes seemed to give a lot more emotional nuance than either of them separate. I like this, but hope it doesn't become as over-abused as autotune is.
3
stcredzero 13 minutes ago 0 replies      
Given the history of cinema, I predict that this will be used excessively by some directors, who will fall into an "uncanny valley" the perceptive and intelligent will find disturbing while others will find it to be a "super-stimulus."

Then the industry will mature, and this will be used with greater subtlety.

4
agumonkey 53 minutes ago 0 replies      
Odd, not long ago French movie theaters were showing this: https://www.youtube.com/watch?v=AhYynZMwzYs

a blend of hand made compositing and simple warping. Technology is catching up too fast.

5
blackhaz 30 minutes ago 0 replies      
This reminds me of the automatic pitch correction (auto-tuning) in the recent pop music. Hopefully this will not signify the dusk of the art of acting.
6
dang 1 hour ago 0 replies      
Building solar farms above the clouds cnrs.fr
20 points by rmason  2 hours ago   4 comments top 2
1
Animats 1 minute ago 0 replies      
Solar in space is only about 2x more effective per unit area of solar panel than solar on the ground, after you deduct transmission losses. The costs, of course, are much higher. The enthusiasm for hydrogen as energy storage is misplaced. Electricity to hydrogen to electricity is maybe 40% efficient. Lithium batteries are 80% - 90% round trip.

In any place that needs air conditioning, solar power is very effective. Peak load and peak solar panel output line up nicely, and little storage is needed. Keep it simple.

2
davnils 44 minutes ago 1 reply      
Isn't it easier to modify the weather in an area populated with PV?
German court rules in favor of the Wikimedia Foundation wikimedia.org
68 points by edward  4 hours ago   14 comments top 3
1
ucaetano 3 hours ago 1 reply      
Good to see a balancing decision like this coming from Europe in general, and Germany in particular.
2
avar 1 hour ago 2 replies      
What if they had won? What bearing would suing the Wikimedia Foundation, a US entity, for material published in the US under US publishing laws, even have?

The German Wikipedia is published and maintained by a US entity, there's a local German Wikimedia chapter but it's not the publisher of the German Wikipedia. The article says that they sued the US entity (the Wikimedia Foundation Inc., not Wikimedia Deutschland), but doesn't explain this issue.

3
franciscop 2 hours ago 2 replies      
I did not see that coming. Mainly from Wikimedia, since the tone is like they've won a war against Evil Corp for freedom, when it looks like they just screwed with this person's life. Yes, Wikimedia might have the legal right to do this, but for the awesome community and organization they are, this feels completely odd.
Firefox 43 released with 64-bit version for Windows, better Private Browsing mozilla.org
8 points by ingve  23 minutes ago   discuss
Can we use Jenkins for that? simondata.com
39 points by brensudol  3 hours ago   28 comments top 13
1
gkop 2 hours ago 1 reply      
Eh, I'm sure there are many things Jenkins is OK at, but it's not that great of a build server.

The Multi-Configuration Project abstraction (i.e. build matrices) is clunky and the plugin ecosystem doesn't respect it well (e.g. the Gerrit plugin is extremely popular but very brittle here). So you wind up with O(n) projects anyway and still need to copy and paste configuration among them.

Also Jenkins configuration itself is pretty nuts - settings splattered all over the web UI, backed by XML - compared to the simplicity of modern tools like Travis (which uses YAML).

And Jenkins' UI I would definitely categorize as typically-poor open source UI, having evolved and grown more complex over many years with no strong guiding vision.

2
thebeardisred 52 minutes ago 0 replies      
While not the best answer, I've used Jenkins extensively in the past for some rather creative purposes. Besides the standard "watch a SCM repo and react" type functions, I've used it in combination with libvirt to spawn VMs self-service for users (as well as coordinate the accounting and access control around this), watch an etag for a file to change in object storage and react, etc. I agree with some of the other commentary in that its most useful aspects are around not using it with a traditional CI pipeline.
3
lmm 1 hour ago 0 replies      
This is part of the problem I have with Jenkins. It does far too many things. In many organizations it seems to occupy an awkward position between dev and production, where it's used for both and so ends up as the most fragile part of a production pipeline. Its config is very awkward, and partly as a result of that I've never seen it properly backed up, or even a staging instance for testing changes, let alone blue/green instances.

So I prefer to have clear segregation. Jenkins as a build tool only. Rundeck for deployment. If I had a big need for scheduling, I'd want a dedicated system for doing that too.

4
suprgeek 49 minutes ago 0 replies      
The flexibility of Jenkins continually impresses me. Given :

1) Simple UI (for simple use cases)
2) Easy setup in single-node or multi-node scenarios
3) Automation capabilities

Unfortunately there are some gaps that make it just enough of a pain to really take up in a Production 24x7 env.

1) In a distributed setup, there are very minimal node management capabilities unless you manually integrate with say Zookeeper or something.

2) the plugins for backing up and restoring configurations are "lacking" to put it politely

3) Very hard to change the Master machine in a master slave set-up

4) etc., etc.

So while Jenkins is like the Swiss army knife of CI, be careful that you don't take it to a (multi-node production) gunfight - to stretch the analogy.

5
forgottenpass 1 hour ago 0 replies      
This is actually my problem with Jenkins. It's great for running arbitrary jobs when triggered to do so. But it's super frustrating for managing many permutations of the same class of jobs, with component tasks, permutations of different input, and orderly storage/display of output categorized on input. Or in other words: it sucks at building and testing software.

I'm almost to the point of using it just to manage generic "do X sort of stuff" tasks across many nodes, with jobs that rely on an external system to run parameterized builds and then store the results back into that external system.

6
sheraz 1 hour ago 1 reply      
Cool ideas here even though a lot of comments are pouring water on it.

Reminds me of this post by Ted Dziuba where he uses makefiles for data processing [1].

I like reading about novel uses of tools other than their original intent.

1- http://widgetsandshit.com/teddziuba/2011/02/stupid-unix-tric...

7
rrdharan 36 minutes ago 0 replies      
At my previous employer I used Jenkins with build matrices quite heavily. Lately I've been using TeamCity and we're paying for the enterprise variant. TeamCity has its share of flaws but FWIW it seems to handle more complex workflow tasks better.

In particular I like the flexibility around snapshot versus artifact dependencies, the APIs are decent (and you can do a lot of troublingly clever things if you invoke the API from within a build), and the metarunner concept seems strictly more powerful than the Jenkins equivalents, albeit with a somewhat steeper learning curve.

8
herge 2 hours ago 1 reply      
We use jenkins for CI, but only on our QA server. I haven't found a way to do CI without giving Jenkins full credentials on our server.

Jenkins is very powerful, but I would not trust it (or any of the myriad plugins we have installed) to not have security holes.

9
muyfine 17 minutes ago 1 reply      
This article feels like it's from years ago. Using Jenkins today is an anachronism - poor UI, poor configuration, poor scalability, poor distribution. My current employer has poured so much well intentioned effort into Jenkins' black hole. Never again.
10
stuff4ben 40 minutes ago 0 replies      
The question appears to be "how can we abuse Jenkins today"? For small teams and orgs, Jenkins is fine, but scales horribly.
11
pilom 2 hours ago 0 replies      
In addition to CI, we use it for all of our Selenium testing and for jobs that would otherwise just be cron jobs. The UI for logging and built in email alerting when jobs fail are invaluable.
12
chizzl 2 hours ago 4 replies      
Does anyone consider Jenkins as a replacement for cron?
13
falcolas 2 hours ago 0 replies      
I hope your Jenkins server is well protected, if you're running so many things through it. One [JVM] to rule them all.
More Responsive Tapping on iOS webkit.org
57 points by cheeaun  4 hours ago   18 comments top 5
1
jordanlev 2 hours ago 4 replies      
Standard accessibility disclaimer to people developing websites: please do not set "user-scalable=no" in the viewport meta tag, as this prevents users from pinch-zooming the page.

Many designers think "there's no need to allow zooming", but this is often coming from people who are blessed with youth and/or great eye-sight. A lot of people (especially as we get older) need to be able to zoom in to read things, or we just want to zoom in on images to be able to see more detail (especially graphics that have text in them).

Fortunately, you don't need to set "user-scalable=no" in order to reap the benefits of the "no tap delay" (thank you to the webkit team for hearing people's feedback about this and changing course from their original plan which was to only disable tap delay when page wasn't scalable).

2
untog 2 hours ago 0 replies      
Long, long overdue. While it is true that the web is slower than native apps, a lot of people's perception of slowness is directly attributable to this delay on click events.
3
nipponese 2 hours ago 1 reply      
Why did it take eight years to implement this workaround? This would have been useful in 2007, after SJ himself told us to make web apps in lieu of native apps for the then non-existent App Store.
4
andy_ppp 2 hours ago 0 replies      
Fantastic! Now if you could fix sending us the scrollTop accurately during momentum scrolling (inside requestAnimationFrame is fine) that would be amazing!
5
TruthSHIFT 2 hours ago 1 reply      
This sounds awesome. When are these changes coming to iOS?
Calculating integer factorials in constant time using overflow behavior microsoft.com
6 points by ingve  1 hour ago   1 comment top
1
Analemma_ 8 minutes ago 0 replies      
Awww, they updated the appearance of Raymond Chen's blog? I kinda liked the defiantly, unapologetically old-fashioned look it had before (like Jamie Zawinski's site).
Software Development Has Diseconomies of Scale allankelly.blogspot.com
107 points by gatsby  7 hours ago   56 comments top 27
1
Htsthbjig 4 hours ago 1 reply      
Normally EVERYTHING has both economies and diseconomies of scale.

You model the price per unit as the sum of different curves.

Complexity increases not only in software, but also if you design a thermal engine, a plane, or a car.

Working on making something as simple as fiberglass, we had something like 100 components, like tensioactives (surfactants). Most of them we had no idea what they were for, as they were added decades ago by someone who knew.

Nobody wanted to remove a given component and be responsible for the fiber breaking and stopping the line, incurring tens of thousands of dollars in penalties, so new complexity was added, but not removed.

In my experience, software is the thing in which YOU CAN get the most economies of scale possible, because you do not depend on the physics of the world. But you need to control complexity as you develop.

In the real world, you create a box because it is the only way of doing something, and the box automatically encloses everything that is inside. You can't see inside, nor want to. It is a black box that abstracts your problems away.

In software you have to create the boxes. Most people don't do it, with nefarious consequences.

2
tedmiston 21 minutes ago 0 replies      
Every time software economies of scale come up, I can't help but be reminded of Jira's pricing model (https://www.atlassian.com/software/jira/pricing?tab=host-in-...):

    (Per month:)

    Users    Total    Per user
    -----    -----    --------
        1      $10       $10
        5      $10        $2
       10      $10        $1
       15      $75        $5
       25     $150        $6
       50     $300        $6
      100     $450        $5
      500     $750        $2
     2000    $1500        $1

3
aplorbust 5 hours ago 4 replies      
Bootloaders are small, but very important software.

k/q is small but a very useful interpreter.

There are so many examples, but it appears that to "the market" the most valued software development is large scale.

The sentiment is create and contribute to large projects or go home. Stupid, but true.

"Do one thing well" is more than just a UNIX philosophy. It is an essential truth. Most programs are lucky if they can do one thing "well". How many so-called "engineers" are afraid to write small, trivial programs lest they be laughed at?

Large programs often become liabilities. Can we say the same for small programs? If it happens, write a new one.

Maybe a user with an unmet need would rather have a program that does the one thing they want, as opposed to one program that can allegedly do everything... whereby they are granted their wish through the addition of "features". More internal complexity. And the majority of users only use a fraction of the program's feature set. Waste.

4
stillsut 30 minutes ago 0 replies      
If you had to build a windows GUI from assembly code, almost all software projects would be too expensive. Instead we reuse high level languages and frameworks to start with the basics a program needs.

To extend the metaphor to milk, what if the milk industry had to invent the glass industry in order to make the bottles it is delivered in? Consumers would have cows, not refrigerators.

The diseconomies-of-scale software is the programs where normal glass simply can't be used to hold the milk. A whole new custom type of glass has to be developed. And this is usually for a type of milk that only about 1,000 people even drink.

5
RyanZAG 5 hours ago 2 replies      
Software has economies of scale in distribution. In fact the economies of scale of software are the key point of how software businesses are causing disruption. A single software program can be replicated infinitely at zero cost and allow anybody who has 1 liter of milk to have 1000 liters of milk at no additional cost. So in the author's example, software would be the same price for both 1 and 2 liters.

Complexity is something completely different and is well known in all products. I can design a calculator that adds numbers very easily. A calculator that does fractions is much harder to design and costs more. A car with a more complicated engine is much harder to build than a simple engine. This has nothing to do with the actual economies of scale of the calculator or car or you could say that cars have dis-economies of scale too - and obviously they don't. They're the poster child for economies of scale.

Building a truck that is 10km long is worse than building 100 trucks that are each 100m long, but this has nothing to do with 'diseconomies of scale' inherent in trucks.

6
tedmiston 26 minutes ago 0 replies      
> Four, get good at working in the small, optimise your processes, tool, approaches to do lots of small things rather than a few big things.

Why, I think I've heard that before...

"Do One Thing and Do It Well" from https://en.wikipedia.org/wiki/Unix_philosophy

7
adrianN 5 hours ago 2 replies      
In software you pay for complexity. Big software is more complex than small software (by definition!) so it's more expensive.

However, managing systems of small software also incurs complexity, the smaller the software components the harder you have to work to make them play together.

It's often not clear a priori whether it's worth to pay a lot more up front to get a monolithic solution or to try and glue together many simple tools.

8
rbrogan 1 hour ago 0 replies      
The best part of the article is the concrete image of the milk cartons. On first seeing the image, your mind is going to tend to think things ought to be one way. Then it comes out and says, "No, it is the opposite." That creates a bit of cognitive dissonance and makes one ask: "Wait, why?" This is good as far as software goes, because it is so abstract that often the brain is not fully engaged when talking about it. It is too easy to know something in the abstract, but then not know it enough to apply it in the concrete.
9
pieterr 39 minutes ago 1 reply      
> And if you don't know, the UK is a proudly bi-measurement country. Countries like Canada, The Netherlands and Switzerland teach their people to speak two languages. In the UK we teach our people to use two systems of measurement!

The Netherlands? We only speak Dutch here. :-)

I guess the author means Belgium, where they speak (at least) two languages: Vlaams and French.

10
ahvetm 4 hours ago 0 replies      
It's a difficult analogy to make, in particular when you forget to consider the ocean of milk underneath you from all the libraries and frameworks you are using. Then the difference between 4 small bottles or 1 big bottle seems less significant.
11
abrgr 4 hours ago 0 replies      
As RyanZAG says, all production has diseconomies of scale in unit complexity. The production process has economies of scale, meaning that churning out more units of equivalent complexity reduces the marginal cost of churning out the next unit of equivalent complexity.

Software exhibits this same economy of scale in production. Take Google's machine learning platform. They allow multiple functional teams to churn out roughly-equivalently-complex machine learning-powered widgets in less and less time. Contrast that with a startup building a single machine learning-powered widget and the marginal cost to Google is significantly lower.

12
jowiar 2 hours ago 0 replies      
From a "Computer Science" perspective: "Economies of scale" is another word for "sublinear growth". Software is, fundamentally, a graph. And the number of edges in a connected graph grows quadratically.

Pretty much any strategy to improve making software at scale, whether code organization or organizational design, is finding ways to limit the complexity of the graph to a constant multiplier of the number of nodes, and keeping that constant small, rather than allowing things to grow quadratically.

13
PeterStuer 4 hours ago 0 replies      
I get the point the article is trying to convey. Scale increases organizational complexity and overhead relatively more than the added manpower contributes. However, the analogy with the 'milk' is far off. First of all, since the 'duplication' cost of software is near 0, buying a single seat or license is more expensive than buying in bulk. With a few exceptions, this can be established by browsing any product or SaaS website. Second, but this is minor: in retail, vendors often abuse this expectation pattern and have now started to charge more per volume for the larger packages. The production side of software is more like R&D, and there you find the diminishing returns, as epitomized in Brooks' 'The Mythical Man-Month'.
14
scholia 2 hours ago 0 replies      
I feel conned. But an honest headline -- Software Development Has Diseconomies of Scale -- wouldn't have sounded controversial....
15
vezzy-fnord 5 hours ago 0 replies      
Diseconomies of scale certainly aren't unique to software, and the author sensibly notes "individual knowledge".

One of the main effects of protectionist and interventionist policies has been related to them. A domestic firm starts to rot, unemployment prospects are rising and a sense of national preservation starts to set in. Thus, in the short term, tariffs are levied, subsidies are made and some macro notion of "stability" or "optimality" is reached. The long term costs are the artificial delaying of the onsets of diseconomies of scale with state and business expansion leading to symbiotic interests. Then people complain about Big Business fucking them over.

(The fact that the author quotes Keynes makes this all the more ironic. Keynes-the-man wasn't objectionable, but the neoclassical synthesis/"pop Keynesianism" of his disciples Paul Samuelson and John Hicks did influence government policy in a negative way, as noted in James M. Buchanan's Democracy in Deficit.)

16
jeffdavis 2 hours ago 0 replies      
"Much of our market economy operates on the assumption that when you buy/spend more you get more per unit of spending."

Supply and Demand says the opposite. The supply curve slopes upward, meaning that a higher per-unit price is required when the aggregate supply is higher.

Economies of scale apply in some situations, but people generally place way too much weight on them.

17
PaulHoule 5 hours ago 1 reply      
There are many kinds of scale.

Poor performance on military projects is often an issue of huge development costs spread out over a tiny number of units.

Apple spends as much to develop an iPhone as it costs to develop a new weapon system, except they sell millions of the phones so the unit cost works out ok.

18
mrep 5 hours ago 0 replies      
People don't think software has economies of scale. The amount of articles I've seen about the "mythical man month" and such all talk about how hard software is to scale.

What people do think is that the marginal cost of reproducing software is basically zero, regardless of size. This means that, choosing between two products, if product 1 has n features, and product 2 has those same exact n features plus an additional one, all consumers will rationally choose product 2 (lots of assumptions, I know).

This is why companies try to get bigger, because if they can offer more features, then all the consumers will choose them and they get all the sales. One could argue that this is the reason why the "power law" effect that's been talked about on HN recently happens.

19
Spooky23 5 hours ago 0 replies      
It depends on your point of view.

The point of software is to deliver value to the business. There's overhead with supporting and integrating each system -- to borrow an analogy from the article, each milk carton needs cardboard, a date stamp, etc. Even if software development productivity drops 75% and delivery cost increases, having one big carton of milk may be more cost effective than supporting 50 smaller, more nimble cartons.

If you want evidence that this exists, consider that SAP and PeopleSoft exist and are thriving businesses. Or that the general ledger of most big financial institutions are running on mainframes with code that's been in production for 30 or more years.

20
osullivj 4 hours ago 0 replies      
Diseconomies of scale apply in wholesale finance too. Put a big order on a limit order book, and you'll exhaust the liquidity and move prices against yourself. Dealers usually offer wider spreads for large trades as they'll have to work off a large position afterwards, and they need compensating for taking the risk of being on the wrong side of a major price move while they have the position.
21
swehner 3 hours ago 0 replies      
Software has diseconomies of scale, but also has economies of scale.

For example, because of context switching: when a developer makes one change it can be pretty easy for them to add another change (everything is already "open").

Other comments here mention distribution and combining small simple tools for something larger.

22
Glyptodon 1 hour ago 0 replies      
To be fair, the first few developers are often more than 1x multipliers. But you definitely reach a team size where additional developers have decreasing marginal value pretty quickly.
23
SagelyGuru 3 hours ago 0 replies      
It is actually a lot worse than that article suggests. There is a history of big government software projects which proved practically impossible to complete on time and on budget or to get them working at all.
24
parsnips 5 hours ago 0 replies      
>Finally, I increasingly wonder where else diseconomies of scale rule? They can't be unique to software development. In my more fanciful moments I wonder if diseconomies of scale are the norm in all knowledge work.

The pop music industry seems to fit the bill.

25
golergka 4 hours ago 1 reply      
Unix command-line tools
26
dragonwriter 5 hours ago 2 replies      
The argument the author makes is really that software development and maintenance has diseconomies with the scale of projects and releases (basically that development and maintenance output needed scales superlinearly with complexity and output scales sublinearly with team size), which seem to be fairly widely accepted observations in the field.

There is some effort to portray this as unusual compared to other industries through a direct comparison to retail costs of larger grocery goods and manufacturing economies of scale, but that's somewhat missing the point. Product development and engineering probably face similar diseconomies in non-software domains (the same complexity issues and human factors issues that affect software development are present) and, OTOH, actually delivering units of identical software (or services provided via software in the SaaS world) has similar (perhaps more extreme in some cases) economies of scale as are seen in many areas of manufacturing, as the marginal costs are low and more units means that the fixed costs divided by units sold go down.

27
TheOtherHobbes 3 hours ago 3 replies      
It's not a brilliant article.

Software is not like milk. That analogy is facile and stupid.

Software should be more like civil engineering, where it's normal to unleash a big team on a big infrastructure project and still have some hope that costs and deadlines stay under control. Or maybe like movie making where there's a cast of thousands, the time is huge, and the costs are epic, but some projects stay under control - while others don't.

It's maybe more interesting to wonder what's different about software than to look for enlightenment on supermarket shelves. Because the problems stated - multiple communication channels, mistakes in modelling and testing - are handled just fine in other industries.

The crippling issues are that you can't model software, and there's not much of a culture of formal specification.

So you can't test software until you build it, requirements may change iteratively, the latest technical "solutions" often turn out to be short-lived fads, and you're always balancing between Shiny New Thing and Tarpit of Technical Debt. That's why it's hard to build. You have to build your cathedral to see if it stays up when it rains. You can't simulate it first. And even if it stays up it may be the wrong shape, or in the wrong place.

It doesn't help that management often sees software as a cost centre instead of an engine room, and doesn't want to pay a realistic rate for quality, maintainability, and good internal documentation.

Having too many people on a project is not the problem. The problem is more usually having no idea what you're doing, why you're doing it, or how you want it done - but believing that you can throw Agile or Six Sigma (etc) at it to make it work anyway, because Management Theory.

The Traveling Salesman with Simulated Annealing, R, and Shiny toddwschneider.com
52 points by sebg  5 hours ago   10 comments top 2
1
teps 4 hours ago 3 replies      
I don't understand how the simulated annealing is helping.

I quote the explanation of step 4:

 If the candidate tour is worse than the existing tour, still maybe accept it, according to some probability. The probability of accepting an inferior tour is a function of how much longer the candidate is compared to the current tour, and the temperature of the annealing process. A higher temperature makes you more likely to accept an inferior tour
Why would you need simulated annealing for such a seemingly simple function?
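For reference, the acceptance rule being quoted is the standard Metropolis criterion; a minimal sketch of that one step (tour lengths and the temperature schedule assumed to come from elsewhere):

    import math
    import random

    def accept(current_length, candidate_length, temperature):
        """Always accept a shorter tour; accept a longer one with probability
        exp(-increase / temperature), so uphill moves get rarer as we cool."""
        if candidate_length <= current_length:
            return True
        return random.random() < math.exp((current_length - candidate_length) / temperature)

Without those occasional uphill moves the search is plain hill climbing, which tends to get stuck in a locally optimal tour; the annealing part is what lets it escape early on and settle down later.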

2
sandGorgon 3 hours ago 2 replies      
A little bit off topic, but has anyone run an R-based API in production? What do you guys use - something like Renjin... or do you put Rserve behind a connection pool?

It's really hard to find something around this.

Milwaukee Protocol wikipedia.org
12 points by philangist  1 hour ago   1 comment top
1
earless1 33 minutes ago 0 replies      
I learned about this via a RadioLab podcast episode. interesting listen http://www.radiolab.org/story/312245-rodney-versus-death/
Bash Academy bash.academy
293 points by obeid  6 hours ago   80 comments top 25
1
alexis-d 5 hours ago 3 replies      
This is a WIP (hence why there are drafts and todos). This is being done by the folks at http://mywiki.wooledge.org/BashGuide which is definitely a valuable resource if you're trying to learn Bash and its idiosyncrasies.

Another good resource is http://wiki.bash-hackers.org/.

2
stevebmark 24 minutes ago 0 replies      
This is an interesting project (I'm all for approachable learning) but it seems to be missing almost every chapter...maybe not ready for the spotlight?

Bash and its array of tools make for a poorly designed scripting language. Writing a non-trivial program, even for an experienced developer, is a painful process. The syntax is uneven, hard to read, and easy to get horribly wrong. I would say mastering Bash has diminishing returns past the intermediate level. Any time you need to write a non-trivial program, you will save time and life expectancy (from stress) by using ANY other language, even Perl or C. Writing complex shell-modifying code in my .bashrc has been one of the more tedious and non-rewarding parts of my life.

3
raboukhalil 4 hours ago 1 reply      
The first few chapters look very good, best of luck with the rest!

In case anyone here is interested in more reading material, I recently wrote a small book about Bash that could be helpful: https://gumroad.com/l/datascience

To make sure it didn't read like a manual, each chapter is an "adventure", where I show how to use only command line tools to answer questions such as: What's the average tip of a NYC cab driver? Is there a correlation between a country's GDP and life expectancy? etc

4
sciurus 3 hours ago 1 reply      
If you want feedback on the quality of your shell scripts, shellcheck is a great tool. You can run it locally or use http://www.shellcheck.net/

https://github.com/koalaman/shellcheck

5
desireco42 5 hours ago 0 replies      
This is much needed. We essentially have all this software we deal with daily and many people don't know basic things about it, not just bash or zsh... and that is funny, people install zsh because it's the thing to do, but you see that they don't know why.
6
digital43 6 hours ago 1 reply      
It'd be cool to see some kind of interactive exercises like (Vim Interactive Guide) http://www.openvim.com/ and (Git) https://try.github.io/
7
headcanon 5 hours ago 1 reply      
Thanks for making this! I wish I had a guide like this when I was starting out. Does anyone know if there is something like this for zsh? I'd imagine there would be a lot of similarities, but some notable differences.
8
massysett 5 hours ago 9 replies      
If you are writing functions in Bash, your task is probably sufficiently complex that it would benefit from being written in a language other than Bash.
9
dsugarman 2 hours ago 0 replies      
In hopes that the creator does read this, it looks like a great resource but I can't read the text, my eyes are very strained and I got a headache very quickly.
10
wickchuck 5 hours ago 0 replies      
I know this is still a WIP, but I found the examples tough to read with the colors chosen. Should add, it does look like a fantastic resource though!
11
codemac 2 hours ago 2 replies      
Use rc instead. My life has gotten so much better since I gave up on other shells.

http://tobold.org/article/rc

http://github.com/rakitzis/rc

12
mirchada776 5 hours ago 0 replies      
13
baby 6 hours ago 0 replies      
Game doesn't work
14
ausjke 3 hours ago 1 reply      
http://guide.bash.academy/03.variables.html#toc7 - on this page those block diagrams look nice; how are they made? Some markdown enhancement like Mermaid or PlantUML?
15
uxcn 4 hours ago 0 replies      
I've started trying to stick to dash syntax for shell scripts.
16
renlo 2 hours ago 0 replies      
The site is unusable for me because of lag from the parallax. It's always on.
17
zackify 3 hours ago 0 replies      
We need the zshell academy
18
jonesb6 1 hour ago 0 replies      
I... really need this.
19
peterwwillis 4 hours ago 1 reply      
Bash is incredibly useful, and I think more people should use it as a cross-platform default scripting language. That said, for the most compatible shell scripting, learn Bourne shell scripting. https://en.wikipedia.org/wiki/Bourne_shell
20
brianzelip 5 hours ago 0 replies      
Folks, it's most likely the case that the author behind bash.academy did not post the link to HN, so ease up on the flames.

I for one am enjoying reading through the informative guide.

Nice job on the author for deploying Prose.io for community editing of the guide.

21
fazza99 5 hours ago 0 replies      
please test the site before releasing it.
22
iamroot 6 hours ago 1 reply      
this is, for the most part, non-functional and incomplete
23
nomadictribe 6 hours ago 0 replies      
Seems like everything after 'Variables and Expansion' isn't finished yet, hence the emptiness.
24
giis 5 hours ago 1 reply      
Looking at the GitHub page https://github.com/lhunath/bash.academy - it's been there for 2 years, and the last commit was 3 months back.
25
AdmiralAsshat 5 hours ago 3 replies      
As opposed to improper bash? Get off your high horse.

There is very much a bad way of doing bash. When I first started doing bash scripts, most of them looked like this:

cat file | grep string

cat file | wc

cat file | while read line

Multiple problems there. Then there were my initial attempts at finding files in a directory:

for i in `ls *`

This is when I learned about globbing.

There is enough variance in how things can be done in Bash with varying degrees of effectiveness that Google even has a Shell Style Guide: https://google.github.io/styleguide/shell.xml

Flint, MI: So much lead in childrens blood, state of emergency declared washingtonpost.com
368 points by uptown  5 hours ago   245 comments top 20
1
rickdale 4 hours ago 9 replies      
I'm glad to see this on HN. Flint is often a forgotten-about place in the world. I grew up there and now live outside of Flint. My dad was murdered there.

But I always think Flint is primed for opportunity. The people need basic essentials: water, food, shelter. But the infrastructure to build factories is there - power, train lines, the whole deal. It's really a shame. The sad part is, the people are still hell-bent on supporting the companies that destroyed the town. Michigan in general is like this; it's why they don't allow Tesla vehicle sales.

Growing up, my family owned a junkyard and the Flint River ran behind it. It was disgusting. Some of the guys would wade through it on their way to and from work. It was a shortcut, but you had to be a true animal to go that route.

2
russdill 4 hours ago 3 replies      
The one thing I don't see is the lead levels of the water supply. Doesn't the EPA have limits on that and isn't it an easy thing to test?

It is true that different water supplies will have different levels of contaminants (lead, arsenic, etc.) but can all be within EPA limits. Switching to a water supply with a higher level of contamination will increase exposure. The medical study seems to look at the percentage of children above 5 µg/dL before and after the switch. It goes from 2% to 4%. So with the old water supply, a certain percentage of children were already being exposed to elevated levels of lead. Switching to a water source with higher lead levels will push more children who are being exposed to lead through other sources to above the 5 µg/dL mark. However, this would seem to indicate that the primary source of lead for these children above 5 µg/dL is something other than the water.

3
yummyfajitas 3 hours ago 7 replies      
So Flint has failed to govern itself - hardly the first time - and now children are poisoned. The city apparently now expects the rest of the country to pick up the tab for the cleanup of their mess.

At some point it should become necessary to recognize and acknowledge that self-government has failed and must end. I'd suggest some form of a city death penalty - declare the city dead and give the locals a one-time offer of relocation assistance to an approved list of better places. The city government, and anyone who remains, are officially on their own.

We've known for decades that Flint (and many similar cities) is doomed. Why do we keep them alive as zombies rather than just help the humans and let the municipalities die?

4
nashashmi 3 hours ago 4 replies      
I just took a look at the Map of Michigan. I realized after zooming into Flint to try to understand where the water was coming from that Michigan has many, many bodies of water scattered all around the place. Plus they are right next to the world's biggest lakes.

And yet they never took care of their water supply? The one state with so much fresh water has little regulation on keeping water protected.

I keep wondering why it's been prophesied that the world, in the end, will wage war over water, not oil. And now I am beginning to understand.

5
a3n 4 hours ago 4 replies      
> Through continued demonstrations by Flint residents and mounting scientific evidence of the water's toxins, city and state officials offered various solutions from asking residents to boil their water to providing them with water filters in an attempt to work around the need to reconnect to the Detroit system.

Can you boil lead out of water, or does it just become more concentrated?

6
jostmey 2 hours ago 1 reply      
Someday Silicon Valley may be left in the same disarray and disrepair. Jobs can be outsourced and bright people lured away to work on new things.
7
ionforce 4 hours ago 4 replies      
What institutional failure led to this? It seems like this has been a long time coming. Why has the leadership of the area allowed this to happen?
8
golergka 3 hours ago 1 reply      
The fact that this kind of issue will generate publicity after just a year, that citizens will actually care enough to fight for their rights, and that the mayor will feel the fallout because of it makes me feel so jealous of the US.

Americans that cry about how the system "doesn't work" really don't have a clue about how this would turn out in other countries.

9
jhallenworld 2 hours ago 0 replies      
I've been trying to understand what the heck happened, since pH management has been a standard part of water treatment forever. I mean, did they not bother to consult with any water supply engineers first?

It all looks like a game between Emergency Managers appointed by the governor to see who can save the most money fastest.

http://www.freep.com/story/news/politics/2015/10/24/emergenc...

10
artlogic 3 hours ago 0 replies      
If you are interested in a detailed breakdown of everything that's been happening over the past year or so, I would suggest reading Michigan Radio's excellent coverage: http://michiganradio.org/term/flint-water

Full disclosure: my wife works as a reporter at Michigan Radio, but generally doesn't cover Flint.

11
usefulcat 4 hours ago 1 reply      
Was looking at a map of Flint and noticed that the City of Flint Water Plant is right next to three metal scrap yards.

https://www.google.com/maps/place/43%C2%B003'25.2%22N+83%C2%...

12
cakes 4 hours ago 0 replies      
This story has been building up and up for a while now, Michigan Radio has several stories/reports/etc.

http://michiganradio.org/term/flint-water#stream/0

13
elorant 4 hours ago 6 replies      
The article failed to explain how the river got so toxic in the first place.
14
rayiner 4 hours ago 5 replies      
What led to this particular situation was apparently rate hikes in the Detroit water system, which caused Flint to switch to using the Flint river as their water source last year. Beyond that, water systems all over the country are in bad shape. Because water rates are subject to public control, they are far too low and there is a huge under-investment in water systems: http://www.infrastructurereportcard.org/a/#p/drinking-water/...
15
cowardlydragon 2 hours ago 0 replies      
So... Flint is the new libertarian dreamland where no regulation exists?
16
whitehat2k9 4 hours ago 0 replies      
Hmm, so in addition to their existing problems with acute lead poisoning, they now have to deal with chronic lead poisoning.
17
purephase 3 hours ago 1 reply      
Does anyone know the extent that the surrounding townships would be impacted by this? My parents live just outside of Flint, but the article only mentions that Flint is impacted.
18
EliRivers 4 hours ago 3 replies      
I particularly like the comments to that article stating that only "liberals" believe the water supply is heaving with lead. The ridiculous political bun-fight infects everything, it seems. It's a mental disease.
19
paulajohnson 3 hours ago 1 reply      
So in ten years time someone is going to kill someone and blame it on the lead he was poisoned with when he was a kid. What would the just result be in such a case?
20
twoquestions 3 hours ago 1 reply      
Why should the Michigan state government care about this? Flint is a bunch of liberals, and the State government is Republican.

The Snyder administration will certainly pay a heavy price for "giving free handouts" to the Democrats in Flint, all to remedy a problem that many Republicans don't believe exists.

EDIT: wording

Carrier Hotels Are Sexy Again datacenterfrontier.com
15 points by 1SockChuck  2 hours ago   3 comments top
1
aaronem 59 minutes ago 2 replies      
For those like me who'd never heard the term "carrier hotel", it's what is (in the US perhaps much more commonly) also known as a colocation center.
Real-time visualisation of orbiting satellites agi.com
59 points by JosephRedfern  5 hours ago   22 comments top 9
1
bluehawk 29 minutes ago 3 replies      
I'm curious: at what appears to be GEO (geostationary Earth orbit) there is a ring of green "operational" satellites that align with the equator, and then there is a "belt" of non-operational satellites that seem to have spread out from them.

1. Is this because the non-operational ones can no longer station keep and slowly spread out?

2. Why are they in a belt shape?

3. Why is the belt not centered on the green ring? They seem to be all "moving" in the same direction? (When I looked at it, their orbits tend to "dip" south while above the western hemisphere and north while above eastern hemisphere)

2
not_that_noob 20 minutes ago 0 replies      
Does it include military/spy satellites? If not, then there are likely even more satellites, both operational and non-operational, up there.

[Edit] Scanning the skies above Russia and China for satellites with a non-specified mission doesn't bring up a single US satellite. So it appears that this data does not include spy satellites.

3
TeMPOraL 1 hour ago 0 replies      
Holy shit, we have a lot of junk out there. It seems like we're building a multi-layered defense shield against alien invasion - the outer band will damage the enemy staging their attack from the Moon, and the inner band shall make it impossible to keep their motherships in low-orbit for continued planetary assault.
4
ben174 22 minutes ago 0 replies      
If you click the 'X' next to ComSpOC - removing the filter - there are a TON more satellites. What is that filter and what are all the additional satellites?
5
ulkesh 3 hours ago 1 reply      
"Real-time" is a bit relative (wink) when it takes minutes to get the data into the browser.
6
eridal 3 hours ago 2 replies      
Anybody know what "nonoperational" means? Are those simply shut down, collecting space dust?

Leaving aside whether it's allowed/legal... I wonder if it's possible to establish communication with a nonoperational satellite, and what tools are required to do so

7
javiramos 38 minutes ago 1 reply      
Can someone explain why there are so many satellites in a cylindrical section far from the earth and not much in-between it and the earth?
8
lordnacho 1 hour ago 1 reply      
There's only a few weather satellites. That surprises me. You'd think there was a lot of demand for something like that?
9
thomasdd 4 hours ago 1 reply      
I am amazed by the Nonoperational/Operational ratio.
EMS: Shared Memory Parallelism for Node.js github.com
48 points by lkgnmlkrewmgre  5 hours ago   10 comments top 5
1
khgvljhkb 3 hours ago 3 replies      
I'd rather have immutable data and no shared state in concurrent apps, but to each their own.

Just remember that there is no big conceptual difference between blocking and locking, which is what you end up doing when having shared mutable state.

I'd recommend anyone check out CSP (like in Go and Clojure/ClojureScript, the latter also with immutable data) or Actors (like in Erlang and Elixir).

2
m_eiman 4 hours ago 0 replies      
It's fairly obvious that this will be followed by XMS.
3
amelius 4 hours ago 1 reply      
Can we have immutable data structures in a multithreaded Node.js, please? It is simple to implement (because immutable), and would solve a lot of problems.

NodeJS works nicely with nonblocking IO, except what most people seem to forget is that the CPU is a resource too, which is still being blocked by NodeJS when handling any event. Multithreading would help alleviate this.

4
MCRed 41 minutes ago 0 replies      
Why not just use Elixir to begin with? Ok, Node existed before Elixir, but erlang has been around a long time.

It seems to me that engineering has become very cargo-cultish. "Lets use node cause we already know javascript" seems to be an argument that people who never learned Java or C/C++/Objective-C or Go or even Python or Ruby would make. Ok, there's a lot of those people... now they are stuck in a monolithic (e.g. non-distributed) system and dealing with scaling problems.

I'm not saying Erlang is always the right answer (I'm a fan of Go at the moment)... just that there's too much hopping on the bandwagon based on a seeming lack of awareness of the technology that's out there.

Imagine if all the effort making node work had been put into existing choices.

5
ndesaulniers 3 hours ago 1 reply      
Do you need more than atomics to build locking primitives?
Apple Opens Laboratory in Taiwan To Develop New Screens bloomberg.com
8 points by jweir  1 hour ago   1 comment top
1
Kor-Chung_Tai 1 hour ago 0 replies      
Interesting... thanks for sharing.
Inflammation: Medicine's burning question newyorker.com
127 points by matco11  10 hours ago   99 comments top 7
1
boothead 9 hours ago 12 replies      
It's great to read things like this... I wish my GP would do the same.

I recently had a really high cholesterol reading (both total cholesterol and LDL). Everything else (blood pressure, blood glucose etc) seems fine. I'm 38, fit and healthy and nothing that suspect in my family history. I found the attitude of my doctor in all this quite surprising. It amounted basically to "You definitely have familial hypercholesterolemia. There is no other possible option here other than statins". No further questions about what I eat, my stress levels, lifestyle - nothing.

What disappoints me the most here is that now I feel like it's all on me to determine what my real risk levels are and what's appropriate to treat this. I don't subscribe to the mainstream NHS view, still heavily pushed, that eating saturated fat -> high cholesterol == unhealthy, as I think it's a lot more complicated (as this article shows). I don't like being in this situation, as I'm as susceptible to human bias as the next person, and I'm not a doctor; however, almost all of the high-quality, science-based writing I've read indicates that the mainstream healthcare system's view on cholesterol is wrong.

2
carbocation 7 hours ago 7 replies      
Somehow LDL-cholesterol has come up in this thread about inflammation.

The article here offers a highly speculative opinion regarding the role of inflammation across many diseases. The luminaries in cardiology cited in this article ran trials which many of us consider to show that, rather than inflammation being important, any reason to start statins is a good reason.

The genetic data currently supports very little role for inflammation in important diseases like coronary artery disease, whereas there is crystalline evidence supporting the connection between LDL-cholesterol levels and mortality. Interestingly, when genetics are invoked and mere epidemiology is reassessed, there is no clear atheroprotective role of HDL cholesterol.

The treatment of statins is very much like the treatment of vaccines: dismissed in a pseudo-intellectual manner by people who know a lot (just not about the subject at hand).

3
snowwrestler 4 hours ago 0 replies      
Inflammation will probably turn out to be some combination of a fad and a real phenomenon that is simply a symptomatic mask for a host of different underlying issues. For example an allergy to dairy and a GI infection are different conditions but both present with inflammation.

And by the way, we don't really know why allergies develop at all. The best treatments simply tire them out, or suppress the symptoms. The immune system in general is poorly understood, so perhaps it's not surprising that people have trouble thinking past "inflammation" in general.

The idea that the human body has some pervasive fault or malfunction that can be addressed by adherence to an ascetic diet is not really new. One can find similar accounts going back hundreds of years in Western medicine, and even farther back in religions. For some reason, our minds seem to incline that way.

And it's true to some extent: obesity makes almost any disease worse, and obesity can be avoided or reduced by an ascetic diet. That is true for almost every human being, which is the highest standard that medical advice can meet.

Unfortunately, most diet advice does not meet that standard. By which I mean, it's easy to find counterexamples to most diet advice. A diet might tell you to avoid dairy, but there are millions of people who consume dairy and yet are perfectly healthy. A diet might tell you to take fish oil, but there are millions of people who never do, yet are perfectly healthy. A diet might tell you to avoid gluten, yet there are millions of perfectly healthy people who eat gluten every day. And again--I'm not talking about real allergies here, I'm talking about general diet advice.

The future seems pretty clear to me. We know that each person's genetic code is unique. We know that each person's genetic code is expressed in unique ways due to epigenetics and other factors. We know that each person has a unique collection of gut bacteria, skin bacteria, and other hangers-on.

Ultimately, if we want to create more perfect health, we will need to learn how to collect each person's unique information, tie it reliably to health outcomes, and then introduce highly personalized therapies based on that information.

The demand on information technology will be enormous. This will be a growth industry for humanity for at least the next century, I bet.

4
hackercomplex 27 minutes ago 0 replies      
I think there's something to the acid/base thing and I think it's connected to inflammation.

I recommend a vegan diet. Juice celery, and everything else... drink it soon after juicing; don't put it in the fridge. Juice about a half pound of raw cannabis per week if you can (it's not cured, so it won't get you high).

and uh.. drink lots of h2o.

5
satx 8 hours ago 1 reply      
statins-for-cholesterol is a hugely profitable business pushed hard for decades by BigPharma, going back to a single faulty "study" in the 1950s, after Eisenhower's heart attack.

My opinion is that statins-for-cholesterol, obsession with cholesterol is as big of a medical scam, a BigPharma misdirection-for-profit, as BigFood's "low-fat" and "whole grain", and gluten scams.

Cholesterol is essential, so much so that 90% is produced by the liver without any dietary consumption.

Cholesterol + lipids + calcium sticking to arteries is a reaction to an injury, mostly from systemic, low-grade inflammation. High blood pressure also injures arteries, also causing cholesterol plaque.

Some people with high cholesterol have no CVD, while some people with low cholesterol die from CVD. Maybe cholesterol isn't the problem?

Aspirin's help with CVD was initially thought to be due to its blood-thinning effect, getting blood through narrowed arteries, but then its anti-inflammatory effect came to seem the more reasonable explanation. btw, statins are also anti-inflammatory (aspirin and similar are cheaper).

Systemic inflammation reduced by aspirin (or statins), less injury to arteries, less plaque.

Systemic, low-grade inflammation also reduces insulin sensitivity, so the body produces more insulin, which is a really nasty hormone. result? adult-onset Type II diabetes.

So "I think" watching inflammatory bio-markers is more important than watching cholesterol, as one could take away from the New Yorker article.

An alkalizing, anti-inflammatory diet is key, complemented by both moderate resistance work and moderate cardio exercise, which also reduce inflammation.

"life-style" of diet and exercise is your best "Heal Thyself" strategy, not BigPharma.

btw, chronic, systemic, low-grade inflammation causes chronically high levels of cortisone (derived from cholesterol) to be produced to reduce the inflammation, and this wreaks havoc on the immune system, which of course causes inflammation as a response to injury or foreign matter.

6
PaulHoule 6 hours ago 0 replies      
I am not really sure that inflammation is a single entity. Today it is fashionable to claim everything has something to do with inflammation, but I remember the time a medical mixup caused me to get a high-sensitivity CRP test after I had just smacked a quadriceps muscle hard enough that I was off my feet and on painkillers for a week, and my CRP reading was as low as it can be.
7
drivativ 39 minutes ago 0 replies      
Having had to deal with chronic illness over the last too-many years and how it is handled by the traditional healthcare system, dozens of alternative health modalities, and by numerous DIY interventions (diet, lifestyle, etc) I really get where the OP is coming from on this. My experience with the same struggle of who to trust has led me to just a few basic guidelines:

1. Doctors (traditional, alternative, specialists, etc) are frequently just completely wrong about many things. And I don't use the word "frequently" just to be inflammatory: after seeing more than a dozen about the same issue, most gave contrary advice and opinions, which inherently means that most of them were wrong. Often their being wrong was relatively harmless, but occasionally it was devastating.

2. Lifestyle often matters. Diet and lifestyle changes were not a cure in this case but they did have a very significant effect, at this point more than any of the several medications tried. There does seem to be a general shift towards understanding and seriously considering epigenetic influences in general but we still seem to be in the dark ages when it comes to how to factor all these things into our medical decisions. There are just tremendous amounts of contradictory data, opinions, etc.

3. Most importantly, everyone is different. And I mean REALLY different. I am not talking about different as in statins only improving outcomes in a small percentage of participants in a study. I mean more like: I love peanut butter, but it kills some people. I have seen drugs, supplements, etc. that are generally accepted as great things do tremendous damage. Some people thrive on a vegan diet while others suffer. One drug relieves chronic nerve pain in some and exacerbates it in another. There are very few universally good or bad things when it comes to health (yes, snarky commenters, asbestos is universally bad for anyone's health - I mean when it comes to things a doctor/practitioner would recommend to a patient). Literally, one person's medicine can be another one's poison.

In summary, my only advice (which you seem to be following) is that you are the only one who will really look out for yourself and you are the only one who is really an expert on what you are dealing with. The best solution I have come up with is to get as much trusted-ish information as I can process and attempt to triangulate the data and move forward carefully and, honestly, somewhat intuitively. In addition to that, find a practitioner who actually listens, considers your input, and helps you come to reasonable conclusions. In my case, unfortunately, that doctor was about the 12th one (and actually an ND in this case) so hopefully you have better luck. Also, though she has been excellent, I still can't put blind faith in her as she is almost certainly wrong about many things as well but at least she knows and accepts that.

In summary of my summary, don't blindly follow anyone's advice. Unfortunately, you have to come to your own conclusions on what to do and who to trust.

NASA Looks to PlayStation VR to Solve Challenge of Space Robot Operation roadtovr.com
11 points by e15ctr0n  2 hours ago   discuss
Qubes OS will ship pre-installed on Purism's security-focused Librem 13 laptop arstechnica.com
142 points by walterbell  12 hours ago   86 comments top 8
1
j_s 3 hours ago 1 reply      
Does this laptop include the (hardware?) modifications required to protect from Intel Management Engine or not? That would be something novel that might justify the higher price.
2
INTPenis 10 hours ago 3 replies      
Since I'm completely surprised by this project and very attracted to it I thought it was best to google around for some perspective. Found this http://www.pcworld.com/article/2960524/laptop-computers/why-...

Among other things. My first question was, is the hardware open? Couldn't find an answer to that.

Edit: Apparently revision 2 of Purism will possibly have Coreboot.

3
clebio 7 hours ago 2 replies      
Is this running multiple heterogeneous OSes on one laptop, or multiple homogeneous OSes (e.g. Linux à la Docker) on one laptop?

I've wanted for years to run Windows and Linux on one laptop simultaneously via hypervisors -- not dual-booting, not not-OS-is-host, etc. -- but was of the impression that hardware/IO would not be feasible.

4
feld 5 hours ago 1 reply      
How is Qubes immune to Xen security issues? Slimmed down, only using PVHVM? I'm sure there have still been some CVEs that apply...
5
lamby 8 hours ago 1 reply      
Congratulations to the Qubes project - not sure if they had any input/contact with Purism, but it's a coup either way.
6
jkot 8 hours ago 1 reply      
> Running a dozen VMs or more, as many Qubes users do, can be resource-intensive, so plenty of RAM and a fast processor are essential.

I hoped it would support 32GB RAM in a 13" laptop, but the maximum is 16GB RAM. The only option seems to be the Portege R30 Skylake version (not yet announced), which has two DDR slots.

7
Create 10 hours ago 0 replies      
"We've proposed the business case to Intel and they are evaluating it. I don't think it's likely it's going to happen anytime soon"

Doctorow's Law: "Anytime someone puts a lock on something you own, against your wishes, and doesn't give you the key, they're not doing it for your benefit."

Bull Mountain, Bullrun, Bullsh

8
bechampion 10 hours ago 3 replies      
The base model is 1600 USD? For an i5? It looks pretty neat, but I feel like it's overpriced, right?