hacker news with inline top comments    11 May 2017 Best
1
Get started making music ableton.com
1895 points by bbgm  1 day ago   448 comments top 51
1
hxta98596 1 day ago 7 replies      
Anecdotal: there are a few different approaches to learning songwriting that seem to click for beginners. The "build up" approach is the most common and is what this link offers: it first teaches beats, then chords, then melodies, and then, in theory, vocals, etc. These lessons in this order make sense to many people, but not everyone.

If you're interested in learning to make music and the lessons in the link are confusing, overwhelming, or boring, some students find a "peel back" approach to learning songwriting easier to grasp at first. A peel-back approach involves picking a song and teaching by stripping away each layer: start by stripping away the vocals, then learn the melodies, then the chords, then finally learn about the drum beat underneath it all. A benefit of the peel-back approach is that melodies and vocals are the memorable parts of a song and the easiest to pick out when listening to the radio, so a student can learn using songs they already know and like. Either way, songwriting is hard and fun. Best of luck.

P.S. I think Ableton makes good software, and I use it along with FL and Logic. They did a solid job with these intro lessons. But it's worth mentioning that there is free software out there (including Apple's GarageBand) that offers the key features a beginner just learning songwriting can practice and mess around with before purchasing more powerful DAW software like Ableton.

2
djm_ 1 day ago 0 replies      
For those wondering, this is made with Elm lang, Web Audio & Tone.js [1]

[1] https://twitter.com/AbletonDev/status/861580662620508160

3
tannhaeuser 1 day ago 31 replies      
I always wondered why musicians keep using the conventional musical notation system and haven't come up with something better (maybe a job for an HNer?).

I mean, conventional music notation represents tones on five lines, each capable of holding a "note" (is that the right word?) on a line as well as between lines, possibly pitched down or up, respectively, by flats and sharps (depending on the key signature, etc.).

Since western music has 12 half-tone steps per octave (an octave being an interval over which the frequency doubles, i.e. a logarithmic scale, so compromises have to be made when tuning individual notes across octaves), this gives a basic mismatch between the notation and, e.g., the conventional use of chords. A consequence is that, for example, with the treble clef you find C' in the second space from the top, and thus at a very different place visually than the C one octave below, which sits on, rather than between, a ledger line below the bottom-most regular line.
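To put numbers on that logarithmic scale: in twelve-tone equal temperament every semitone multiplies frequency by 2^(1/12). A quick JavaScript sketch (the helper name is mine, purely illustrative):

    // Equal temperament: each of the 12 semitones in an octave
    // multiplies frequency by the same ratio r, chosen so r^12 = 2
    // (an octave doubles the frequency).
    const SEMITONE = Math.pow(2, 1 / 12); // ~1.0594631

    // Frequency of the note n semitones away from A4 = 440 Hz.
    function noteFrequency(n) {
      return 440 * Math.pow(SEMITONE, n);
    }

    console.log(noteFrequency(0));   // A4 = 440 Hz
    console.log(noteFrequency(3));   // C5 ~ 523.25 Hz
    console.log(noteFrequency(-9));  // middle C ~ 261.63 Hz
    console.log(noteFrequency(12));  // A5 = 880 Hz, one octave up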

I for one know that my dyslexia when it comes to musical notation (e.g. not recognizing notes fast enough to play from the sheet) has kept me from becoming proficient on the piano (well, that, and my laziness).

4
JasonSage 1 day ago 11 replies      
This is some good coverage of the music theory behind songwriting, which is important in making songs that sound good.

However, there's another part of making music which is not covered at all here: the actual engineering of sounds. Think of a sound in your head and try to recreate it digitally: it'll involve sampling and synthesizing, there are tons of filters and kinds of sound manipulation to go through, and they all go by different names and have different purposes. It's a staggering amount of arcane knowledge.
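To make "filters and sound manipulation" slightly more concrete, here is a minimal subtractive-synthesis sketch using the browser's raw Web Audio API (my own toy example, nothing Ableton-specific): a harmonically rich sawtooth whose lowpass cutoff sweeps down, the classic filter-sweep sound.

    // Subtractive synthesis: start bright, then filter harmonics away.
    // Paste into any modern browser console to hear it.
    const ctx = new AudioContext();

    const osc = ctx.createOscillator();
    osc.type = 'sawtooth';        // harmonically rich source
    osc.frequency.value = 110;    // A2

    const filter = ctx.createBiquadFilter();
    filter.type = 'lowpass';      // carve away the upper harmonics
    filter.frequency.setValueAtTime(2000, ctx.currentTime);
    filter.frequency.exponentialRampToValueAtTime(200, ctx.currentTime + 2);

    osc.connect(filter);
    filter.connect(ctx.destination);
    osc.start();
    osc.stop(ctx.currentTime + 2);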

Where is the learning material on how to do this without experimenting endlessly or looking up everything you see? I want a reverse dictionary of sorts, where I hear a transformation of a sound and I learn what processing it took to get there in a DAW. This would be incredibly useful to learn from.

5
2845197541 1 day ago 6 replies      
This seems like the wrong place to start. This seems like the place to start learning a DAW and snapping together samples: to make, IMO, depersonalized, unoriginal loop music in a society awash with it, because DAWs and looping have created an angel's path to production and proliferation. Learn to drag and drop and you can tell people you meet that you're a musician or a producer. I've met too many mediocre people like this. There should be a disclaimer when this page loads: learn to play an instrument first. Bringing forth music from a physical object utilizes the body as well as the mind, attunes you to nuance, and emphasizes that music is primarily a physical phenomenon. It's also just fun, and you can jam with or perform for friends. This cut-and-paste, drag-and-drop, sample-and-loop mentality popularized by the rise of hip-hop has led to an oversaturation of homogeneous, uninspired, unoriginal sound in society. Maybe I'm old fashioned, but I think people should spend long, frustrated hours cutting and blistering their fingers for the craft, at least at first. That builds character and will show in your music as you move on.
6
adamnemecek 1 day ago 13 replies      
I'm actually working full time on a new DAW that should make writing music a lot faster and easier. Current DAWs don't really understand music. Also, the note-input process and experimentation are extremely time-consuming, and the DAW never helps. Current DAW : my thing = Windows Notepad : IDE. The HN audience is definitely one of my core groups.

If you are interested, sign up here https://docs.google.com/forms/d/1-aQzVbkbGwv2BMQsvuoneOUPgyr... and I'll contact you when it's released.

7
exabrial 1 day ago 1 reply      
Guys, if you haven't seen Sonic Pi (http://sonic-pi.net/), it's also a great tool! You can write beats using a Ruby DSL and it runs them in real time.

I sat down and did this in an hour: https://github.com/exabrial/sonic-pi-beats/blob/master/house...

Sam Aaron is the guy behind the project, he does a lot of ambient type stuff: https://www.youtube.com/watch?v=G1m0aX9Lpts

8
jarmitage 1 day ago 1 reply      
Check out Jack Schaedler, who works on this at Ableton: https://jackschaedler.github.io/

He even made an interactive essay about the GRAIL text recognizer from the 1960s https://jackschaedler.github.io/handwriting-recognition/

9
fil_a_del_fee_a 1 day ago 2 replies      
I purchased the Ableton Push 2 a month or so ago and it has to be one of the most beautifully engineered pieces of equipment I have ever used. Look up the teardown video. Extremely simple, yet elegant. The Push 1 was created by Akai, and apparently Ableton wasn't satisfied, so they designed and built their own.

https://www.youtube.com/watch?v=YItWQdJgXLs

10
puranjay 1 day ago 11 replies      
I'm an amateur musician and one of the things I hate about electronic music is how "distant" it all feels.

I'm used to picking up the guitar, playing a few chords and writing a melody.

Ableton (or any other DAW) feels like a chore. I have to boot up the computer, connect the MIDI keyboard, the audio interface and the headphones, then wait for Ableton to load, then create a new track and add a MIDI instrument before I can play a single note.

I know the sessions view in Ableton was an attempt to make the music feel more like jamming, but it doesn't really work for me. A lot of musicians who play instruments I've talked to feel the same way.

I would love an "Ableton in a box" that feels more intuitive and immediate.

11
radiorental 1 day ago 2 replies      
Related: this is trending on reddit this morning. Just fascinating to watch someone build up a catchy track on such an (apparently) basic piece of equipment...

https://www.youtube.com/watch?v=FK5cU9qWRg0

12
Mister_Snuggles 1 day ago 3 replies      
As someone who has no musical talent whatsoever, I'm oddly intrigued by Ableton's products. I've occasionally stumbled across the Push[0] and been fascinated by it as an input device.

This site is another thing to add to my Intriguing Stuff list.

[0] https://www.ableton.com/en/push/

13
thatwebdude 1 day ago 1 reply      
Get Started Making Music (In Ableton Live).

Love the simplicity, though it does seem to favor EDM (for obvious reasons).

I've always loved the idea of using Live in a live improvisation context, potentially with multiple instruments having their own looping setup; or just a solo thing. It's hard to find that sort of thing, though.

Checking out Tone.js now.

14
6stringmerc 12 hours ago 0 replies      
Over the years I like to think Ableton has been at the forefront of the digital music community (at least among a pack that includes Korg), at a special nexus of hardware, software, VST developers, and global sharing, by way of an incredibly robust and deep Live Suite program. Seeing the firm continue to reach out and share community resources is habitual for them, and I'm very pleased to see this get all sorts of attention from this community. The intersection of Technology and Art is a bright, multi-cultural future, and with that comes responsibility. To put it in a phrase: this is an example of Ableton providing a ladder up to new members, rather than slamming the door behind them once a certain level was reached. Enjoy!
15
ahoglund 1 day ago 0 replies      
This looks strangely similar to a collaborative app I made last year with Elixir/Elm/WebAudio API:

https://www.youtube.com/watch?v=TCVuLh5Io9A

16
pishpash 1 day ago 0 replies      
To all the people complaining: I feel you. There is not one tool that takes you through the entire workflow of making music well, but they sell software pretending to support the entire workflow. In truth, you write and arrange in specialized notation software, create samples in specialized synthesis software (or record live audio), then you use audio workstations to fix, edit, transform, and mix. Even there you may rely on external hardware or software plugins. These tools aren't meant for a one-person creator; they mimic the specializations in the music industry. A good all-in-one piece of software simply does not exist, and small teams trying to work on these projects are trying to bite off a real big pie. It's very complex and requires a lot of specialized knowledge, and many of the pieces are probably patent-encumbered, too. But good luck!
17
gcoda 1 day ago 0 replies      
They put Tone.js to good use. Promoting Ableton by showing what cool stuff you can do with a free JS library that works in the browser: weird? https://tonejs.github.io
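For a taste of what the library looks like, a rough four-on-the-floor sketch (my own example against the v0.x-era Tone.js API; MembraneSynth, Transport.scheduleRepeat, and triggerAttackRelease are Tone.js names, but check the docs for your version):

    // Schedule a kick-style synth on every quarter note.
    const kick = new Tone.MembraneSynth().toMaster(); // toDestination() in newer versions

    Tone.Transport.scheduleRepeat(function (time) {
      kick.triggerAttackRelease('C1', '8n', time);    // pitch, duration, when
    }, '4n');

    Tone.Transport.start();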
18
geoffreyy 1 day ago 0 replies      
The first page of that tutorial reminded me of a product I saw at the Apple store a few weeks ago called Roli. They have a great app [0], but the hardware [1] itself is not ideal but unfortunately necessary to unlock some features... I will be waiting for a v2...

[0] https://roli.com/products/noise

[1] https://www.apple.com/shop/product/HKFR2VC/A/roli-lightpad-b...

19
calflegal 1 day ago 2 replies      
The timing of this post is funny, as just this week I launched a little ear-training game built with React and Tone.js: https://www.notetuning.com/
20
tomduncalf 22 hours ago 0 replies      
Off topic, but I posted the exact same link about 24 hours earlier: https://news.ycombinator.com/item?id=14291332

Not that it's important, but I'm kinda curious (a) why my submission only got 7 points, and (b) how it was possible for someone else to submit the same link so soon after and gain the points, rather than my submission getting boosted?

Is it just random chance/time of day of posting? Or is it because the user who posted this had more points to start with and so was more likely to be "noticed"?

Awesome site in any case!

21
ilamont 1 day ago 2 replies      
I was looking for an app like this for my son. He started with "My Singing Monsters" and some music lessons at school, but when I tried to get him into GarageBand it was too much for a beginner.

Thank you to the creator ... I will show it to him later today. I am not sure how far he can take it, but I like what I have seen so far.

Also, if anyone has other suggestions for music-making apps for tween kids I am all ears ...

22
stevenj 1 day ago 0 replies      
I think the design of this is really interesting.

It's designed to make the user (i.e. anyone who likes music) just want to play with it, in a way that's very intuitive thanks to its simple, visual layout. And it provides instant feedback that makes you want to keep tinkering to make something you like more and more.

Web development/programming training tool makers should really take note of this.

23
dyeje 1 day ago 1 reply      
Wow this is super high quality content. Props to Ableton. By far my favorite DAW, but I wish they would come out with a cheaper license.
24
meri_dian 1 day ago 3 replies      
I can't speak for other DAW's, but Ableton was really easy for me to pick up as a complete novice to digital music production
25
skandl 1 day ago 0 replies      
This is beautiful and amazing. I love how each step builds on the previous one and uses pop examples to explain theory concepts. I've often wondered about so many of the things presented here, particularly what common characteristics a genre has with respect to rhythm! Big kudos to the team who built this. I'd love to learn the development backstory, as this feels a lot like an internal side project made by passionate individuals and less like a product idea dreamed up with requirements and specs.
26
alxdistill 1 day ago 0 replies      
Like any technology, there can be lots of different inputs and outputs. I think it is safe to say that Roland and the TR-808, 909, and 303 changed music notation, and music, forever with their popularization of grid-based music programming. It may be that Ableton is doing the same with their software. Each year the tools for these sorts of creative activities get better. The Beatles recorded Abbey Road on a giant, expensive four-track owned by a record label. In 1995 I saved up my money from a summer job and bought a four-track cassette recorder for about $500. Now you can get a four-track app for your mobile phone for about $5. Or download an open source one for free.

YAY :)

27
hmage 1 day ago 0 replies      
I noticed many people commenting here think there's only one page.

There's more -- scroll down and click next.

28
bbreier 1 day ago 1 reply      
Two friends and I have tried to make music production easier (and more robust) on the phone in our spare time, and came up with our iPhone app, Tize (https://itunes.apple.com/us/app/tize-make-music-beats-easy/i...), to that end.

If it sounds like something you're interested in please give it a go! We're always working to improve it and open to feedback. (Android is coming soon)

29
PeanutNore 1 day ago 0 replies      
I've been using Ableton Live for about a week after getting a free copy with the USB interface I bought (Focusrite Scarlett 2i2, highly recommend) and I had to turn to YouTube to figure out how to actually sequence MIDI drums in it.

I use it pretty much solely for recording, but I take advantage of the MIDI sequencer functions to program in a drum beat instead of recording to a click, because I've found my timing and rhythm is so much better playing to drums than it is just playing to a metronome.

30
WWKong 1 day ago 0 replies      
I wanted to build something similar for mobile to make music on the go. I started it here (abandoned now, but code is linked): http://buildanappwithme.blogspot.in/2016/04/lets-make-music....
31
guruz 1 day ago 0 replies      
I think I've watched this video a ton of times: https://www.youtube.com/watch?v=eU5Dn-WaElI

That guy is using Ableton Live to re-create a popular song by The Prodigy.

32
dsmithatx 1 day ago 0 replies      
Did this get voted 1023 points (so far) because it's a great article, or does everyone just love music? Btw, I use Ableton after my Pro Tools rig was stolen, and I'm buying a new MatrixBrute. I can't wait to check out this site.
33
schemathings 1 day ago 0 replies      
If you want an interesting take on the 'Live' part of Ableton Live, look for 'Kid Beyond Ableton' videos. He builds up tracks live on stage by beatboxing all the instruments, and recently uses something called a Hot Hand as his controller.
34
whiddershins 1 day ago 6 replies      
Ableton Live is my main DAW. I use it every day, generally for hours, and for a wide variety of purposes.

The most depressing thing about Ableton is made obvious within two seconds of messing with that tutorial: a complete disregard for music in the sense of pushing the boundaries of time, of doing things that are not tied to any sort of grid, and of music as an emotive form.

So many aspects of music are very annoying or borderline impossible to do in Ableton. Yet in all these years, and with so many installations, they just never addressed those issues. Instead they vaguely pretend that music requiring features they don't have is radically experimental. Which might become true if so many people learn music only through using their software.

Seriously, Ableton. Stop pretending making music is clicking on and off in little boxes. It's embarrassing.

--

Edited to take out the "art" part and put in a couple of more specific criticisms.

35
viach 1 day ago 0 replies      
It reminds me of "Generative Music Otomata": http://www.earslap.com/page/otomata.html
36
rubatuga 1 day ago 0 replies      
This is extremely comprehensive for any beginner/intermediate musician/composer, and I'm really impressed at how they managed to implement the content in a mobile friendly manner!
37
clarkenheim 1 day ago 1 reply      
Similar concept using Daft Punk samples instead: http://readonlymemories.com/ plus some filtering and looping capability.
38
ablation 1 day ago 0 replies      
Love it. Great web app from a really good company. I use Ableton a lot and I'm continually impressed with their software and content marketing activity.
39
xchip 14 hours ago 0 replies      
This is AWESOME! Sharing it with all my friends!

Thanks OP!

40
tommynicholas 1 day ago 0 replies      
I used to be a professional musician and I've used a lot of real Ableton equipment and I still found this incredibly interesting and fun.
41
markhall 1 day ago 0 replies      
Wow, this is super impressive. I fell in love after adding a few chords over drums. Amazing.
42
mayukh 1 day ago 2 replies      
Wow, this looks great. Is there an app for this? I'd love for my son to try.
43
octref 1 day ago 0 replies      
Yep: not using the hottest framework, not an SPA, not a PWA. Just something that loads fast and works great. Good job.
44
pugworthy 1 day ago 0 replies      
So much for being productive today...
45
gowk 23 hours ago 0 replies      
That's fantastic!
46
moron4hire 1 day ago 0 replies      
This is really awesome. They really went the extra mile on building this out. It even supports multi-touch screens. Very well done.
47
duggalr2 1 day ago 0 replies      
This is amazing!
48
_pmf_ 1 day ago 0 replies      
Amazing presentation. Concentrates on the content, works on mobile[0], no bullshit effects.

[0] within the constraints of Android's embarrassingly crappy audio subsystem

49
hashkb 1 day ago 6 replies      
This is not the basics of making music. It's a super advanced technique using a computer. The real basics involve pencil, (staff) paper, and hard work. Downvotes please.
50
uranian 1 day ago 5 replies      
A more appropriate title would be: Get started triggering samples.

Making music is really something different IMO.

51
greggman 19 hours ago 0 replies      
Am I missing something? I went through all the tutorials, and AFAICT there isn't much here. It seemed like "here's a piano. Here's some music made on the piano. Now bang on the piano. Fun, yeah?"

Is there really any learning here? Did I miss it? I saw the sample songs and a few minor things like "major chords are generally considered happy and minor sad", etc., but I didn't feel that by going through this I'd actually have learned much about music.

I'm not in any way against EDM or beat-based music. I bought Acid 1.0 through 3.0 back in the 90s, which AFAIK was one of the first types of apps to do stuff like this. My only point is that I didn't feel like I was getting enough learning to truly use a piece of software like this. Rather, it seemed like a cool, flashy page with a low content ratio. I'm not sure what I was expecting. I guess I'd like some guidance on which notes to actually place where and why, not just empty grids and very vague direction.

2
Net neutrality is in jeopardy again mozilla.org
915 points by kungfudoi  2 days ago   368 comments top 43
1
shawnee_ 2 days ago 9 replies      
Net neutrality is fundamental to free speech. Without net neutrality, big companies could censor your voice and make it harder to speak up online. Net neutrality has been called the First Amendment of the Internet.

Not just harder. Infinitely more dangerous. Probably the scariest implications of NN being gutted are those around loss of anonymity on the Internet. With ISPs allowed to sell users' browsing history, data packets, and personal info with zero legal implications, that anonymity suddenly comes with a price. And anything that comes with a price can be sold.

A reporter's sources must be able to stay anonymous in the many instances where releasing information about corruption creates political instability, endangers the reporter, endangers the source, or keeps the truth from being revealed. These "rollbacks" of regulations make it orders of magnitude easier for any entity in a corporation or organization to track down people who attempt to expose their illegal actions or skirting of laws. Corporations have every incentive to suppress information that hurts their stock price. Corrupt local officials and governments have every incentive to suppress individuals who threaten their "job security". Corrupt PACs have every incentive to drown out that one tiny voice that speaks the truth.

A government that endorses suppression cannot promote safety, stability, or prosperity of its people.

EDIT: Yes, I am also referring to the loss of Broadband Privacy rules as they have implications in the rollback of net neutrality: https://www.theverge.com/2017/3/29/15100620/congress-fcc-isp...

Loss of broadband privacy: Yes, your data can and will be sold

Loss of net neutrality: How much of it and for how much?

2
guelo 2 days ago 1 reply      
It's insane how many comments there are, on HN of all places, that don't understand that the end of Net Neutrality is the end of the open web. People who never got a peek at CompuServe have no idea what fire we're playing with here. The open web is the most significant human achievement since the transistor, and we're about to happily kill it.
3
jfaucett 2 days ago 27 replies      
Does anyone else find the internet market odd? Up until now net neutrality and other policies have given us the following:

1. Massive monopolies which essentially control 95% of all tech (google, facebook, amazon, microsoft, apple, etc)

2. An internet where every consumer assumes everything should be free.

3. An internet where there's only enough room for a handful of players in each market globally, i.e. if you have a "project-management app" there will not be a successful one for each country, much less hundreds for each country.

4. Huge barriers to entry for any new player in many of the markets (no one can even begin competing with Google search for less than 20 million).

I think there's still a lot of potential to open up new markets with different policies that would make the internet a much better place for both consumers and entrepreneurs - especially the small guys. I'm just not 100% sure maintaining net-neutrality is the best way to help the little guy and bolster innovation. Anyone have any ideas how we could alleviate some of the above mentioned problems?

EDIT: another question :) If net-neutrality has absolutely nothing to do with the tech monopolies maintaining their power position then why do they all support it? [https://internetassociation.org/]

4
pbhowmic 2 days ago 3 replies      
I tried commenting on the proceeding at the FCC site but I keep getting service unavailable errors. The FCC site itself is up but conveniently we the public cannot comment on the issue.
5
bkeroack 2 days ago 9 replies      
I've written it before and I'll write it again (despite the massive downvotes from those who want to silence dissent): Title II regulation of the Internet is not the net neutrality panacea that many people think it is.

That is the same kind of heavy-handed regulation that gave us the sorry copper POTS network we are stuck with today. The free market is the solution, and must be defended against those who want European-style top-down national regulation of what has historically been the most free and vibrant area of economic growth the world has ever seen.

The reason the internet grew into what it is today during the 1990s was precisely because it was so free of regulation and governmental control. If the early attempts[1] to regulate the internet had succeeded, HN probably wouldn't exist and none of us would have jobs right now.

1. https://en.wikipedia.org/wiki/Communications_Decency_Act (just one example from memory--there were several federal attempts to censor and tax the Internet in the 1990s)

6
rosalinekarr 2 days ago 0 replies      
The propaganda Comcast is tweeting right now is absolutely ridiculous [1].

[1] https://twitter.com/comcast/status/859091480895410176

7
SkyMarshal 2 days ago 1 reply      
Looking at Comcast's webpage on this:

http://corporate.comcast.com/comcast-voices/comcast-supports...

They're arguing that Title II Classification is not the same as Net Neutrality, with the following statement:

"Title II is a source of authority to impose enforceable net neutrality rules. Title II is not net neutrality. Getting rid of Title II does not mean that we are repealing net neutrality protections for American consumers.

We want to be very clear: As Brian Roberts, our Chairman and CEO stated, and as Dave Watson, President and CEO of Comcast Cable writes in his blog post today, we have and will continue to support strong, legally enforceable net neutrality protections that ensure a free and Open Internet* for our customers, with consumers able to access any and all the lawful content they want at any time. Our business practices ensure these protections for our customers and will continue to do so."*

So if Title II goes away, where do those strong, legally enforceable net neutrality protections come from? Wasn't that the reasoning behind Title II in the first place: that it's the only effectively strong, legally enforceable way of protecting net neutrality (vs. other methods with loopholes)?

8
stinkytaco 2 days ago 6 replies      
Honest question, but is Net Neutrality the answer to these problems?

A few weeks ago on HN, someone made an analogy to water: someone filling their swimming pool should pay more for water than someone showering or cooking with it. This seems to make sense to me; water is a scarce resource and it should be prioritized.

Is the same true of the Internet? I absolutely agree that ISPs that are also in the entertainment business shouldn't be allowed to prioritize their own data, but that seems to me an anti-trust problem, not a net neutrality problem. I also agree that ISPs should be regulated like utilities, but even utilities are allowed to limit service to maintain their infrastructure (see: rolling blackouts).

Perhaps I simply do not understand NN, and perhaps organizations haven't done a good job of explaining it, but I wonder whether these problems are best solved by the FTC, not the FCC.

9
smsm42 20 hours ago 0 replies      
That article makes little sense to me. For example:

> Net neutrality is fundamental to free speech. Without net neutrality, big companies could censor your voice and make it harder to speak up online.

Big companies are censoring your voice right now! Facebook, Twitter, YouTube, and literally every other big provider censor online speech all the time. If it's so scary, why does nobody care? And if it's not, what is Mozilla trying to say here?

> Net neutrality is fundamental to competition. Without net neutrality, big Internet service providers can choose which services and content load quickly, and which move at a glacial pace.

The Internet has been around for a while, and nothing like that happened, even though we didn't have the current regulations in place until 2015, i.e. the last two years. At what point do we start asking for evidence and not just "they might do something evil"? Yes, there were shenanigans, and they were handled, way before the 2015 regulations were in place.

> Net neutrality is fundamental to innovation

Again, innovation has been going on for decades without the current regulations. What happened such that it suddenly started requiring them?

> Net neutrality is fundamental to user choice. Without net neutrality, ISPs could decide you've watched too many cat videos in one day,

ISPs never did that, as far as we know, in the entire history of ISPs. Why would they suddenly start now? Because they want to be abandoned by users and fined by regulators (who did fine ISPs well before 2015)?

> In 2015, nearly four million people urged the FCC to protect the health of the Internet

Before 2015, the Internet did fine for decades. What happened between 2015 and 2017 such that we now desperately need this regulation and couldn't survive without it, like we did until 2015?

10
wehadfun 2 days ago 1 reply      
Trump's appointees disappoint me a lot. This guy, and the one over the EPA.
11
Sami_Lehtinen 2 days ago 0 replies      
My Internet connection contract already says that they reserve the right to queue, prioritize, and throttle traffic, which is used to optimize traffic. Doesn't sound too neutral to me? It's also clearly stated that some traffic on the network gets absolute priority over secondary classes.

Interestingly, at one point a 100 Mbit/s connection wasn't nearly fast enough to play almost any content from YouTube. Maybe there's some kind of relation, maybe not.

12
alexanderdmitri 2 days ago 0 replies      
I think a great thing to do (if you are for net neutrality) is to pick specific parts of the NPRM filed with this proceeding and comment directly on them[1], to help do some of the work for the defense. I feel sorry for anyone who might actually need to address this document point for point to defend net neutrality.

I tried my hand at the general claim of regulatory uncertainty hurting business, then Paragraphs 45 and 47:

-> It is worth noting that by bringing this into the spotlight again, the NPRM is guilty of igniting the same regulatory uncertainty it repeatedly claims has hurt investments.

-> Paragraph 45 devotes 124 words (94% of the paragraph), 3 sources (75% of the references in the paragraph), and a number of figures (100% of the explicitly hand-picked data) to the claim that Title II regulation has suppressed investment. It then ends with 8 words and 1 reference vaguely stating "Other interested parties have come to different conclusions." Given the NPRM's insistence on both detail and clarity, this is absolutely unacceptable.

-> There are also a number of extremely misleading and unsubstantiated arguments throughout. Reference 114 in Paragraph 47, for example, is actually a haphazard mishmash of 3 references with clearly hand-picked data from somewhat disjointed sources and analyses. The next two references [115, 116] in the same paragraph point to letters sent to the FCC over 2 years ago by small ISPs, before regulations were classified as Title II. Despite discussing the fears raised in these letters, the NPRM provides little data on whether those fears were actually borne out. In fact, one of the providers explicitly mentioned in reference 115, Cedar Falls Utilities, has not in any way been subject to these regulations (they have fewer than 100,000 customers; in fact, the population of Cedar Falls isn't even half of the 100,000-customer exemption the FCC has provided!). This is obviously faked concern for small ISP businesses, and again, given the NPRM's insistence on both detail and clarity, this is absolutely unacceptable.

[1] makes a great point on specifically addressing what's being brought up in the NPRM: https://techcrunch.com/2017/04/27/how-to-comment-on-the-fccs...

13
MichaelBurge 2 days ago 1 reply      
> The order meant individuals were free to say, watch and make what they want online, without meddling or interference from Internet service providers.

> Without net neutrality, big companies could censor your voice and make it harder to speak up online.

Hmm, was it ever prohibited for e.g. some Twitter user to write your ISP an angry letter calling you a Nazi, so they shut your internet off to avoid the headache?

I've only heard about "net neutrality" in the context of bandwidth pricing. It's very different if companies are legally required to sell you internet (except maybe short of an actual crime).

14
vog 2 days ago 1 reply      
It is really a pity that in the US, net neutrality was never established by law, but "just" at the institutional level.

Here in the EU, things are much slower, and the activists were somewhat envious of how fast net neutrality was established in the US, while in the EU it is a really slow legislative process. But now it seems that this slower way is at least more sustainable. We still don't have real net neutrality in the EU, but the achievements we have so far are more durable and can't be overthrown as quickly.

15
sleepingeights 2 days ago 1 reply      
Many of these articles are missing an easily exploitable position. The key term is "bandwidth" which is the resource at stake. What is being fought over is how to define this "bandwidth" in a way that will be enforceable against the citizen and favorable to the corporation (i.e. "government").

One way they could do this is to divide it like they did the radio spectrum by way of frequency, where frequency is related to "bandwidth". The higher the frequency, the greater the bandwidth. With communication advances, the frequencies can be grouped just like they did with radio, where certain "frequencies" are reserved by the government/military, and others are monopolized by the corporations, and a tiny sliver is provided as a "public" service.

This would be the most easily enforceable way for them to attack NN and the first amendment, as the model already exists in radio.

* It is already being applied by cable providers through "downstream/upstream", where your participation by "uploading" your content is viewed as inferior to your consumption of it, i.e. your contribution (upload) is a tiny fraction of your consumption (download).

* Also, AWS, Google, and other cloud services charge your VPS for "providing" content (egress) and charge you nothing for consuming it (ingress). On that scale, the value of what you provide is so minuscule it is almost non-existent next to the value of what you consume.

tl;dr: NN is already partly destroyed.

16
laughingman2 2 days ago 2 replies      
I never thought Hacker News would have so many people opposing net neutrality. Is some alt-right group brigading this thread? Which is ironic, considering how they claim to care about free speech.
17
pc2g4d 2 days ago 0 replies      
The top comments here seem to misunderstand net neutrality. It's not about companies selling your browsing history (that was recently approved by Congress in a separate bill[1]), but rather about whether ISPs can prioritize the data of different sites or apps. IIUC, net neutrality doesn't really provide any privacy protections, though it's likely good for privacy by making a more competitive market that motivates companies to act more (though not always) in consumers' interests.

1: https://arstechnica.com/information-technology/2017/03/how-i...

18
justforFranz 2 days ago 0 replies      
Maybe we should give up and just let global capitalists own and run everything and jam web cameras up everyone's asshole.
19
FullMtlAlcoholc 2 days ago 1 reply      
Why does anyone want to give more power to Comcast or AT&T? Neither has ever been described as innovative... unless you count clueless members of Congress.
20
mbroshi 2 days ago 2 replies      
I am agnostic on net neutrality (i.e. neither for nor against; just admitting my own lack of ability to assess its fallout).

I read a lot of sweeping but hard-to-measure claims about its effects (such as in the linked article). Are there any concrete, measurable effects that anyone is willing to predict?

Examples might be:

* Average load times for the 1000 most popular webpages will decrease.

* There will be fewer internet startups over the next 5 years than the previous.

Edit: formatting

21
akhilcacharya 2 days ago 0 replies      
It's interesting how much could have changed if ~175k or fewer people in the Great Lakes region had voted differently..
22
notadoc 2 days ago 0 replies      
Is the internet a utility? And are internet providers a utility service? That's really what this is about.
23
em3rgent0rdr 2 days ago 0 replies      
Title II "Net Neutrality" is a dangerous power grab -- a solution in search of a problem that doesn't exist, with the potential to become an engine of censorship (requiring ISPs to non-preferentially deliver "legal content" invites the FCC and other regulatory and legislative bodies to define some content as "illegal").

Title II "Net Neutrality" is also an instance of regulatory capture through which large consumers of bandwidth (such as Google and Netflix) hope to externalize the costs of network expansions to accommodate their ever-growing bandwidth demands. To put it differently, instead of building those costs into the prices their customers pay, they want to force Internet users who AREN'T their customers to subsidize their bandwidth demands.

24
tycho01 2 days ago 1 reply      
I'm curious: to what extent could a US ruling on this affect the rest of the world?
25
kristopolous 2 days ago 0 replies      
Freedom is never won, only temporarily secured.
26
boona 2 days ago 1 reply      
If the internet is fundamental to free speech, maybe it's not a good idea to hand its freedom over to state control, and in particular to an agency that historically has gone beyond its original mandate and censored content.

When you hand over control to the government, don't ask yourself what it would look like if you were creating the laws; ask yourself what it'll look like when self-interested politicians create them.

27
billfor 2 days ago 0 replies      
I'm not sure putting the internet into the same class of service as the telephone made sense, given all the unintended consequences. Everyone is fine until they wind up paying $50/month for their internet and then see another $15 in government fees added to their bill. From a pragmatic point of view, I'm sure the government will always have the option to regulate it later on.
28
WmyEE0UsWAwC2i 2 days ago 0 replies      
Net neutrality should be in the constitution, safe from lobbyists and the politicians in office.
29
arca_vorago 2 days ago 0 replies      
Someone tell me again why we don't have public internet backbone like we do roads?
30
twsted 2 days ago 0 replies      
It's sad that this article stayed in the top positions for so little time. And we are on HN.

But is this HN folks' fault?

At the time of my writing, "Kubernetes clusters for the hobbyist" (who thinks it is as important as this one?) sits six positions above, with 470 fewer points and almost 300 fewer comments; both were posted 6-7 hours ago.

31
weberc2 2 days ago 10 replies      
I wasn't impressed with this article; it reads like fear mongering. More importantly, I don't think the fix is regulation; I think it's better privacy tech + increased competition via elimination of local monopolies. Do we really want to depend on the government to enforce privacy on the Internet?
32
c8g 2 days ago 1 reply      
> Net neutrality is fundamental to competition.

so, I won't get 20 times faster youtube. fuck that net neutrality.

33
1ba9115454 2 days ago 12 replies      
How much of this can just be fixed by the free market?

If I feel an ISP is limiting my choices wouldn't I just switch?

34
M2Ys4U 2 days ago 0 replies      
(In the US)
35
2 days ago   3 replies
36
jameslk 2 days ago 4 replies      
I don't understand why you're getting downvoted. This is a valid question and shouldn't be downvoted so others can learn from the discussion.
37
jwilk 2 days ago 2 replies      
If you want people to take your article seriously, then maybe you shouldn't put a pointless animated GIF in it.
38
marcgcombi 2 days ago 1 reply      
39
grandalf 2 days ago 2 replies      
40
JustAnotherPat 2 days ago 0 replies      
November 8th was critical to the Internet's future, not today. People made their bed when they refused to get behind Clinton. Now you must accept the consequences.
41
albertTJames 2 days ago 0 replies      
Neutrality does not mean anything should be authorized... international law should allow ISPs to submit to judicial surveillance of individuals if those individuals are suspected of serious crimes: terrorism, pedophilia, black-hat hacking, psychological operations/fake news. I don't think it is a violation of my freedom that policemen can stop me in the street. Moreover, the article is extremely vague and uses argumentum ad populum to push its case while remaining quite unclear on what is really planned: "His goal is clear: to overturn the 2015 order and create an Internet that's more centralized."
42
bjt2n3904 2 days ago 2 replies      
So, which is it HackerNews? Are we OK with companies deciding what gets on the internet, or are we not? On one hand, we laud Facebook et al. for suppressing "fake news", and then we get upset when ISPs do the same.

Furthermore, the FCC has historically engaged in content regulation. Anyone wonder why there are no more cartoons on broadcast television? Or why the FCC is investigating Colbert's Trump jokes? If we're so concerned about content freedom, the FCC is not the organization to trust.

43
vivekd 2 days ago 2 replies      
>The internet is not broken, There is no problem for the government to solve. - FCC Commissioner Ajit Pai

This is sooo true. If internet carriers were favoring certain kinds of content, or censoring, or giving less bandwidth to certain content, or charging for certain content, and this was causing the problems described in the Mozilla article, then yes, we could have legislation to solve that problem.

What gets to me about the net neutrality movement is that the legislation they are pushing for is based on vague fears and panic. Caring about net neutrality has become some sort of weird Silicon Valley techno-virtue-signaling.

If ISPs start behaving badly or restricting free speech, I would happily be on board with legislation to address that. This has not happened, and there is no evidence of any imminent threat of it happening. Net neutrality legislation is a solution to a vague, non-existent, speculative problem.

3
Uncensorable Wikipedia on IPFS ipfs.io
674 points by bpierre  2 days ago   247 comments top 26
1
cjbprime 2 days ago 11 replies      
Strategically, this (advertising IPFS as an anti-censorship tool and publishing censored documents on it and blogging about them) doesn't seem like a great idea right now.

Most people aren't running IPFS nodes, and IPFS isn't seen yet as a valuable resource by censors. So they'll probably just block the whole domain, and now people won't know about or download IPFS.

We saw this progression with GitHub in China. They were blocked regularly, perhaps in part for allowing GreatFire to host there, but eventually GitHub's existence became more valuable to China than blocking it was. That was the point at which I think that, if you're GitHub, you can start advertising openly about your role in evading censorship, if you want to.

But doing it here at this time in IPFS's growth just seems like risking that growth in censored countries for no good reason.

2
badsectoracula 2 days ago 4 replies      
Correct me if I'm wrong, but if accessing some content through IPFS makes you a provider for that content, doesn't that mean that you are essentially announcing to the world that you accessed the content, which in turn can be used for targeting you by those who do not want you to access it?

In other words, if someone from Turkey (or China or wherever) uses IPFS to bypass censored content, wouldn't it be trivial for the Turkish/Chinese/etc government to make a list with every single person (well, IP) that accessed that content?

3
smsm42 2 days ago 1 reply      
Ironically, I've just discovered that https://ipfs.io/ has a certificate signed by StartCom, known for being a source of fake certificates for prominent domains[1]. So in order to work around censorship, I have to go to a site which, to establish trust, relies on a provider known for providing fake certificates. D'oh.

[1] https://en.wikipedia.org/wiki/StartCom#Criticism

4
k26dr 2 days ago 1 reply      
The following command will allow you to pin (ie. seed/mirror) the site on your local IPFS node if you'd like to contribute to keeping the site up:

ipfs pin add QmT5NvUtoM5nWFfrQdVrFtvGfKFmG7AHE8P34isapyhCxX
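Once pinned, keeping the snapshot reachable just means leaving your node running (assuming a stock IPFS install, whose daemon also serves a local gateway on port 8080):

    ipfs daemon

After that the content is also browsable through your own gateway, e.g. http://localhost:8080/ipfs/QmT5NvUtoM5nWFfrQdVrFtvGfKFmG7AHE8P34isapyhCxX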

5
mirimir 2 days ago 0 replies      
Some additional information may help in the duty vs prudence debate. It's true that IPFS gateways can be blocked. But as noted, anyone can create gateways, IPFS works in partitioned networks, and content can be shared via sneakernet. Content can also be shared among otherwise partitioned networks by any node that bridges them.

For example, it's easy to create nodes on both the open Internet and the private Tor OnionCat IPv6 /48. That should work for any overlay network. And once nodes on such partitioned networks pin content, outside connections are irrelevant. Burst sharing is also possible. Using MPTCP with OnionCat, one can reach 50 Mbps via Tor.[0,1]

0) https://ipfs.io/ipfs/QmUDV2KHrAgs84oUc7z9zQmZ3whx1NB6YDPv8ZR...

1) https://ipfs.io/ipfs/QmSp8p6d3Gxxq1mCVG85jFHMax8pSBzdAyBL2jZ...

6
TekMol 2 days ago 3 replies      
How is Wikipedia censored in Turkey? Are providers threatened with punishment if they resolve DNS queries for wikipedia.org? Or are they threatened with punishment if they transport TCP/IP packets with IPs that belong to Wikipedia?

Wouldn't both be trivial to get around? For DNS, one could simply use a DNS server outside Turkey. For TCP/IP packets, one could set up a $5 proxy with any provider around the world.

7
eberkund 2 days ago 2 replies      
These distributed file systems are really interesting. I'm curious to know if there is anything in the works to also distribute the compute and database engines required to host dynamic content. Something like combining IPFS with Golem (GNT).
8
kibwen 2 days ago 7 replies      
But Wikipedia allows user edits, and so is inherently censorable. You don't need to block the site, you can just sneak in propaganda a little at a time.
9
BradyDale 11 hours ago 0 replies      
Thanks for sharing this... FWIW, I wrote a story about it on Observer.com: http://observer.com/2017/05/turkey-wikipedia-ipfs/
10
treytrey 2 days ago 4 replies      
I'm not sure this thought makes sense, but just putting it out there for rebuttals and to understand what is really possible:

I assume IPFS networks can be disrupted by a state actor, and the only thing that a state actor like the US may have some trouble with is strong encryption. I assume it's also possible that quantum computers, if and when they materialize at scale, would defeat classical encryption.

So my point in putting forward these unverified assumptions is to question whether ANY technology can stand in the way of a determined, major-world-power-type state actor. Personally, I have no reason to believe that's realistic, and all these technologies are toys relative to the billions of dollars in funding that the spy agencies receive.

11
Spooky23 2 days ago 0 replies      
Why bother with a technological anti-censorship solution for Wikipedia when the obvious solution is to just attack the content directly.

If a censoring body wants some information gone, just devote some attention to lobbying the various gatekeepers in Wikipedia.

12
DonbunEf7 2 days ago 2 replies      
Isn't IPFS censorable? That's the impression I got from this FAQ entry: https://github.com/ipfs/faq/issues/47
13
y7 2 days ago 1 reply      
Does IPFS work properly with Tor these days? Last I checked support was experimental at best.

Without proper support for an anonymity overlay, using it to get around your government's censors doesn't sound like a very wise idea.

14
pavement 2 days ago 3 replies      
Listen, I get that there are other parts of the world experiencing serious "technical difficulties" lately...

But I can only read English! Where's the English version?

https://ipfs.io/ipfs/QmT5NvUtoM5nWFfrQdVrFtvGfKFmG7AHE8P34is...

This hash doesn't do much for me:

 QmT5NvUtoM5nWFfrQdVrFtvGfKFmG7AHE8P34isapyhCxX
How do I find the version I want?

If I can't read it in my language, it's still censored for me.

15
slitaz 2 days ago 1 reply      
Didn't you mean "unblockable" instead?
16
maaaats 2 days ago 1 reply      
When browsing the content, how does linking work? I mean, don't they kinda have to link to a hash? But how can they know the hash of a page when the links on that page depend on the other pages, and this may be circular?
17
hd4 2 days ago 4 replies      
Maybe a very dumb question, but why didn't they build anonymity into it rather than advise users to route it over Tor? My guess is it may have something to do with the Unix philosophy. It's still a great tool regardless.
18
LoSboccacc 2 days ago 1 reply      
> In short, content on IPFS is harder to attack and easier to distribute because its peer-to-peer and decentralized.

> port 4001 is what swarm port IPFS uses to communicate with other nodes

uhm.

19
captn3m0 2 days ago 4 replies      
The SSL cert chain is broken for me.
20
amelius 2 days ago 1 reply      
Sounds good, but isn't this a fork of Wikipedia?
21
forvelin 2 days ago 2 replies      
At this moment, it is enough to use Google DNS or some VPN to reach Wikipedia in Turkey. This is a good use case, but IPFS is just overkill.
22
awqrre 2 days ago 0 replies      
until they create laws...
23
davidcollantes 2 days ago 1 reply      
Will it be available if the domain (ipfs.io) stops resolving, gets seized, or is blocked?
24
nathcd 2 days ago 2 replies      
I'd be really curious to hear more about how Goal 2 (a full read/write wikipedia) could work.

IIRC, writing to the same IPNS address is (or will be?) possible with a private key, so allowing multiple computers to write to files under an IPNS address would require distributing the private key for that address?
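For reference, single-key IPNS publishing with the ipfs CLI looks roughly like this (the hash is a placeholder; each node signs with its own key, which is exactly the limitation in question):

    ipfs name publish /ipfs/<content-hash>   # signs with this node's private key
    ipfs name resolve                        # what this node's IPNS name points to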

Also, I wonder how abuse could be dealt with. I've got to imagine that graffiti and malicious edits would be much more rampant without a central server to ban IPs. It seems like a much easier (near-term) solution would be a write-only central server that publishes to an (read-only) IPNS address, where the load could be distributed over IPFS users.

25
devsigner 2 days ago 0 replies      
Here it is on Archive.is just for good measure and posterity purposes: https://archive.is/GnjGT
26
onetwoname 1 day ago 0 replies      
How about you remove all the lies from wikipedia, the lies curated by CIA. No? Oh, right, I forgot you only make the illusion of justice.
4
CockroachDB 1.0 cockroachlabs.com
642 points by hepha1979  14 hours ago   307 comments top 43
1
dis-sys 13 hours ago 3 replies      
I really like the fact that the CockroachDB team recently did a detailed Jepsen test with Aphyr. The follow-up articles from both CockroachDB and Aphyr explaining the findings are very interesting to read. For those who might be interested -

https://www.cockroachlabs.com/blog/cockroachdb-beta-passes-j...

https://jepsen.io/analyses/cockroachdb-beta-20160829

2
rantanplan 4 hours ago 0 replies      
In an era where hot air and hip DB technologies prevail, I'd like to emphasize the fact that the CockroachDB engineers are consistently honest and down to earth, in all relevant HN posts.

This builds up my confidence in their tech, so much so that even though I had no real reason to try this new DB, I'm gonna find one! :D

3
Svenskunganka 11 hours ago 3 replies      
Pardon the nature of my question, but I'm really interested in what your experience has been so far building a database with Go. Has its runtime (the GC, for example) posed any issues for you so far? Looking at other RDBMSs, languages with manual memory management like C or C++ seem to be the go-to choice, so what were the reasons you chose Go?

I'm quite frankly amazed that Go's runtime is able to support a database with such demanding capabilities as CockroachDB!

4
vtomasr5 9 hours ago 0 replies      
I think this is the DB project of the year in the open source community. Cockroach Labs has made an incredible effort to develop and test a new database, and these guys are giving it away for free (I read about the Series B raise too ;)) for us to use.

Thanks for doing this. You're very much appreciated. (BTW I love the name and the logo!!)

5
sixdimensional 11 hours ago 2 replies      
How does Cockroach efficiently handle the shuffle step when data lives on many nodes in the cluster and has to move to be joined? Does Cockroach need high-capacity network links to function well?

I always see companies claim linear speedup with more nodes, but surely that can't be the case if the nodes are geographically separated over anything less than gigabit links? Perhaps linear speedup with more nodes is only possible over high-speed connections? How high is that, exactly?

Congratulations to the team on the release! Introducing this kind of database is no easy task - thank you and great job, keep up the good work!

6
wmfiv 12 hours ago 1 reply      
Are there published benchmarks for multi-key operations and more complex SELECT statements? I apologize if I missed them.

I'm trying to determine whether there's a place for Cockroach within what I think are the constraints in the database space.

* Traditional SQL Databases

 - Go to solution for every project until proven otherwise.
 - Battle tested and unmatched features.
 - Hugely optimized with incredible single node performance.
 - Good replication and failover solutions.
* Cassandra

 - Solved massive data insert and retention.
 - Battle tested linear scalability to thousands of nodes.
 - Good per node performance.
 - Limited features.
It seems like many new databases suffer from providing scale-out but relatively poor per-node performance, so that a mid-size cluster still performs worse than a single-node solution based on a traditional SQL database.

And if you genuinely need huge insert volumes, the per-node performance means you'd need an enormous cluster, whereas Cassandra would deal with it quite comfortably.

7
toddmorey 10 hours ago 1 reply      
There was a great session with Spencer Kimball (CockroachDB creator) and Alex Polvi (CoreOS) at the OpenStack Summit. It's a good overview and demo: https://youtu.be/PIePIsskhrw
8
nhumrich 52 minutes ago 2 replies      
Does the replication work cross-region, say US-East and US-West, or even cross-continent? It sounds like the timing requires very low latency and might not work in those scenarios.
9
v_elem 12 hours ago 1 reply      
It looks like there is still no mechanism for change notification, which in our particular case is the only missing feature preventing us from using it as a PostgreSQL replacement.

Does anybody know if this feature is planned for the short or medium term?

https://github.com/cockroachdb/cockroach/issues/6130

https://github.com/cockroachdb/cockroach/issues/9712

10
sergiotapia 13 hours ago 2 replies      
Is Cockroach DB intended for just "big-data" companies? Would a small project run really well with Cockroach DB?

Of course a small database probably won't need a lot of the unique features, but is this aiming to replace PG/MySQL in the small/mid-size projects?

11
daliwali 10 hours ago 2 replies      
CockroachDB looks like a great alternative to PostgreSQL, congrats to the team for doing so much in such a short time. The wire protocol is compatible with Postgres, which allows re-using battle-tested Postgres clients. However it's a non-starter for my use case since it lacks array columns, which Postgres supports [0]. I also make use of fairly recent SQL features introduced in Postgres 9.4, but I'm not sure if there are major issues with compatibility.

[0] https://github.com/cockroachdb/cockroach/issues/2115

12
nik736 13 hours ago 3 replies      
What advantages do I have using Cockroach compared to Postgres, Cassandra, Rethink or MongoDB? (I know that all of them are completely different, that's part of the question)
13
therealmarv 12 hours ago 3 replies      
Does this theoretically work interplanetary (just asking, for science)?
14
ericb 13 hours ago 2 replies      
Can Cockroach be plugged into a Rails app where MySQL was?

I'd be interested in hearing:

- the backup story

- the replication/failover story

- horizontal scaling story (is it plug and play)

15
misterbowfinger 13 hours ago 1 reply      
Can someone give a brief pros/cons between Cockroach DB Core and Google Cloud Spanner?
16
apognu 9 hours ago 1 reply      
I've been following CockroachDB for quite a while. Great job on 1.0.

I've had a question for quite some time though (and I think there is an RFC for it on GitHub): do we still need to have a "seed node" that is run without the --join parameter, or can we run all the nodes with the same command line, with the cluster waiting for quorum to reconcile on its own?

17
v3ss0n 10 hours ago 2 replies      
Will there be a RethinkDB-style realtime changefeed or PostgreSQL's LISTEN/NOTIFY?
18
Gurrewe 13 hours ago 1 reply      
Congratulations to the team on the release!

Everything under "The Future" really excites me, especially the geo-partitioning features. That is something that I'm really looking forward to using!

19
gog 10 hours ago 2 replies      
Slightly offtopic, but what do you use for your blog and documentation pages?
20
nathell 13 hours ago 1 reply      
I read the announcement, got all excited, then clicked "What's inside CockroachDB Core?" and got rewarded with a 404. Ouch! This itches.
21
MichaelBurge 12 hours ago 1 reply      
It probably scales but how is the performance? If I need to load a couple billion rows and do a dozen joins in some analytics, is that one machine, a dozen, or 100?

Is it more for web apps, analytics, or what? When would I consider switching from e.g. Postgres to CockroachDB?

22
bfrog 12 hours ago 0 replies      
Should've gone with tardigrade instead as a name, those little bastards can live in space!
23
v3ss0n 10 hours ago 1 reply      
Congrats Ben Darnell and team! I am a fan of his work on the Tornado web server!
24
singularjon 10 hours ago 0 replies      
How does the speed compare to that of PostgreSQL and MongoDB?
25
raarts 6 hours ago 1 reply      
On a three node cluster will it survive two nodes going down?
26
acd 8 hours ago 0 replies      
Congrats on bringing out 1.0! Been following the project and looking forward to trying it out!
27
amq 13 hours ago 1 reply      
Can someone explain how it is/can be better than MariaDB Galera or MySQL Group Replication?
28
xmichael99 4 hours ago 1 reply      
Now if we could get a 1.0 of TiDB ???
29
brightball 13 hours ago 1 reply      
How does it compare to Couchbase with its N1QL?
30
wtf_is_up 12 hours ago 1 reply      
Does CockroachDB have a streaming API a la RethinkDB changefeeds? This is a killer feature, IMO.
31
api 12 hours ago 1 reply      
About nine months ago we made the decision to go with RethinkDB for our infrastructure in place of PostgreSQL (at least for live replicated data), but if this existed at the time we'd have seriously taken a look. We're pretty happy with RethinkDB but I plan on still taking a look at this so we have a backup option.
32
newsat13 10 hours ago 7 replies      
Very disappointed with HN turning into a 4chan/reddit style trolling board about the name. Guys, we get it that you don't like the name. Can we please stop bike shedding and move on? The people at cockroachdb have obviously seen all your messages but decided it's worth keeping the name. What more is there to talk about? Why not talk about the relative technical merits of this DB?
33
anthonylebrun 13 hours ago 5 replies      
Since there's a little side riff about the name going on I thought I'd throw in my 2 cents. Personally I love the name. I think it does a great job of conveying the spirit of the project and provides unlimited pun opportunities. Plus it's memorable, just like a real life roach encounter. Unfortunately I'm sure some people will discriminate against your DB on the basis of name alone. That's ludicrous, but that's our species for ya.
34
johnwheeler 13 hours ago 11 replies      
I think the name "Cockroach" was a really poor decision from a marketing standpoint. The team intended to convey durability, since cockroaches can live through anything. But when I think of a cockroach, I think, gross, disgusting, etc.
35
sandstrom 9 hours ago 0 replies      
I think it's an excellent name!

Also, biologists would argue that the cockroach is a magnificent creature, highly adaptable and very fit (in 'survival of the fittest' terms).

I would pay for and deploy a cockroach db because of its name.

36
ccallebs 13 hours ago 2 replies      
First, this is awesome! Congrats to the team for reaching this milestone.

Secondly, I think the name is memorable and conveys exactly what it should. If I were ever on an engineering team that chose not to use CockroachDB due to being "grossed out" by the name, I wouldn't be on that engineering team for long. Perhaps someone can explain the knee-jerk reaction to it for me.

37
triangleman 12 hours ago 0 replies      
Name doesn't bother me. It's memorable and I'd definitely consider using it, whether in a startup or enterprise. Better than "Postgres" -- how do you even pronounce that?
38
cwisecarver 13 hours ago 8 replies      
Cue the comments stating that no one will use this because the name is bad.
39
socmag 13 hours ago 3 replies      
Clocks are meaningless under load.

The higher the frequency of the transactions, the more you get into quantum physics.

In reality, nobody cares if T-Mobile debited your account 0.01ms before WalMart.

[edit] What is important is isolation and consistency of the transactions.

40
deferredposts 12 hours ago 1 reply      
In a couple of years, I suspect that they will rebrand to just "RoachDB". It conveys the same meaning, while not being as awkward to discuss with users/clients.
41
Perignon 13 hours ago 0 replies      
Name still sucks and is disgusting af.
42
niceperson 13 hours ago 1 reply      
>Cockroach

What were they thinking?

43
whatnotests 9 hours ago 1 reply      
/me forks the damned repo, renames it, wins the Internet.
5
CPU Utilization is Wrong brendangregg.com
589 points by dmit  1 day ago   88 comments top 34
1
faragon 1 day ago 3 replies      
I respect Brendan, and although it is an interesting article, I have to disagree with him: the OS tells you about OS CPU utilization, not CPU micro-architecture functional unit utilization. So if the OS uses a CPU for running code until a physical interrupt or a software trap happens, in that period the CPU has been doing work. Unless the CPU were able to do a "free" context switch to a cached area, not having to wait for e.g. a cache miss (hint: SMT/"hyperthreading" was invented for exactly that use case), the CPU is actually busy.

If in the future (TM) using CPU performance counters for every process becomes really "free" (as in "gratis" or "cheap"), the OS could report badly performing processes for the reasons exposed in the article (low IPC indicating poor memory access patterns, unoptimized code, code using too-small buffers for I/O - causing system performance degradation through excessive kernel processing time - etc.), showing the user that despite high CPU usage, the CPU is not getting enough work done (in that sense I could agree with the article).

2
glangdale 1 day ago 2 replies      
The problem is that IPC is also a crude metric. Even leaving aside fundamental algorithmic differences, an implementation of some algorithm with an IPC of 0.5 is not necessarily slower than an implementation that somehow manages to hit every execution port and deliver an IPC of 4.

I can improve IPC of almost any algorithm (assuming it is not already very high) by slipping lots of useless or nearly useless cheap integer operations into the code.

People always tell you "branch misses are bad" and "cache misses are bad". You should always ask: "compared to what?" If it was going to take you 20 cycles' worth of frenzied, 4-instructions-per-clock work to calculate something you could keep in a big table in L2 (assuming that you aren't contending for it), you might be better off eating the cache miss.

Similarly, you could "improve" your IPC by avoiding branch misses (assuming no side effects) by calculating both sides of an unpredictable branch and using CMOV. This will save you branch misses and increase your IPC, but it may not improve the speed of your code (if the cost of the work is bigger than the cost of the branch misses).
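
To make the CMOV trick concrete, here is a minimal C sketch of the pattern being described (the function names and "expensive" computations are invented for illustration; compilers typically emit a conditional move for the ternary form at -O2, though they are not obliged to):

    #include <stdint.h>

    /* Branchy version: an unpredictable branch costs a pipeline
       flush (~15-20 cycles) on every misprediction. */
    int64_t select_branchy(int64_t a, int64_t b, int cond)
    {
        if (cond)
            return a * a + 3;
        return b * b - 7;
    }

    /* Branchless version: compute both sides, then select. Branch
       misses vanish and IPC goes up, but both computations are
       always paid for - so it is not automatically faster. */
    int64_t select_branchless(int64_t a, int64_t b, int cond)
    {
        int64_t va = a * a + 3;          /* both sides evaluated */
        int64_t vb = b * b - 7;
        return cond ? va : vb;           /* candidate for CMOV */
    }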

3
dekhn 1 day ago 1 reply      
IPC is amazing. We had some "slow" code, did a little profiling, and found that a hash lookup function was showing very low IPC about half the time. Turns out, the hash table was mapped across two memory domains on the server (NUMA), and memory lookups from one processor to the other processor's memory were significantly slower.

perf on a binary that is properly instrumented (so it can show you per-source-line or per-instruction data) is really great.

4
inetknght 1 day ago 2 replies      
I use `htop` for all of my Linux machines. It's great software. But one of my biggest gripes is that "Detailed CPU Time" (F2 -> Display options -> Detailed CPU time) is not enabled by default.

Enabling it allows you to see a clearer picture of not just stalls but also CPU steal from "noisy neighbors" -- guests also assigned to the same host.

I've seen CPU steal cause kernel warnings of "soft-lockups". I've also seen zombie processes occur. I suspect they're related but it's only anecdotal: I'm not sure how to investigate.

It's pretty amazing what kind of patterns you can identify when you've got stuff like that running. Machine seems to be non-responsive? Open up htop, see lots of grey... okay so since all data is on the network, that means that it's a data bottleneck; over the network means it could be bottlenecked at network bandwidth or the back-end SAN could be bottlenecked.

Fun fact: Windows Server doesn't like having its disk IO go unserviced for minutes at a time. That's not a fun way to have another team come over and get angry because you're bluescreening their production boxes.

5
nimos 1 day ago 2 replies      
Perf is fascinating to dive into. If you are using C and gcc you can use record/report, which show you line by line and instruction by instruction where you are getting slowdowns.

One of my favorite school assignments: we were given an intentionally bad implementation of the Game of Life compiled with -O3 and tried to get it to run faster without changing compiler flags. It's sort of mind-boggling how fast computers can do stuff if you can reduce the problem to fixed-stride for loops over arrays that can be fully pipelined.

6
exabrial 13 hours ago 0 replies      
Your CPU will execute a program just as fast at 5% utilization as at 75%.

We honestly need a tool that compares I/O, memory fetches, cache misses, TLB misses, page-outs, CPU usage, interrupts, context switches, etc. all in one place.
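
Most of those numbers are already exposed through the kernel's perf events interface; here's a rough C sketch (Linux-only, error handling elided, and the counter set reduced to just instructions and cycles) of how such a tool could read hardware counters to compute IPC for the current process:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <linux/perf_event.h>

    static int open_counter(unsigned long long config)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = config;
        attr.disabled = 1;
        attr.exclude_kernel = 1;
        attr.exclude_hv = 1;
        /* pid=0, cpu=-1: measure this process on any CPU */
        return syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
    }

    int main(void)
    {
        int cycles = open_counter(PERF_COUNT_HW_CPU_CYCLES);
        int instrs = open_counter(PERF_COUNT_HW_INSTRUCTIONS);

        ioctl(cycles, PERF_EVENT_IOC_ENABLE, 0);
        ioctl(instrs, PERF_EVENT_IOC_ENABLE, 0);

        volatile long sum = 0;                /* the "workload" */
        for (long i = 0; i < 100000000; i++)
            sum += i;

        ioctl(cycles, PERF_EVENT_IOC_DISABLE, 0);
        ioctl(instrs, PERF_EVENT_IOC_DISABLE, 0);

        long long c = 0, n = 0;
        read(cycles, &c, sizeof(c));
        read(instrs, &n, sizeof(n));
        printf("instructions=%lld cycles=%lld IPC=%.2f\n",
               n, c, (double)n / (double)c);
        return 0;
    }

The same perf_event_open call accepts other PERF_COUNT_HW_* and PERF_COUNT_SW_* configs (cache misses, context switches, page faults), which is essentially what perf stat aggregates.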

7
prestonbriggs 1 day ago 0 replies      
At Tera, we were able to issue 1 instruction/cycle/CPU. The hardware could measure the number of missed opportunities (we called them phantoms) over a period of time, so we could report percent utilization accurately. Indeed, we could graph it over time and map periods of high/low utilization back to points in the code (typically parallel/serial loops), with notes about what the compiler thought was going on. It was a pretty useful arrangement.
8
alain94040 13 hours ago 1 reply      
The article is interesting, but IPC is the wrong metric to focus on. Frankly, the only thing we should care about when it comes to performance is time to finish a task. It doesn't matter if it takes more instructions to compute something, as long as it's done faster.

The other metric you can mix with execution time is energy efficiency. That's about it. IPC is not a very good proxy. Fun to look at, but likely to be highly misleading.

9
heinrichhartman 18 hours ago 0 replies      
It seems to me that the CPU utilization metric (from /proc/stat) has far more problems than misreporting memory stalls.

As far as I understand it, the metric works as follows: at every clock interrupt (every 4ms on my machine) the system checks which process is currently running, before invoking the scheduler:
- If the idle process is running, idle time is accounted.
- Otherwise the processor is regarded as utilized.

(This is what I got from reading the docs, and digging into the source code. I am not 100% confident I understand this completely at this point. If you know better please tell me!)

There are many problems with this approach: every time slice (4ms) is accounted either as completely utilized or completely free. There are many reasons for processes going on CPU or off CPU outside of clock interrupts - blocking syscalls are the most obvious one. In the end a time slice might be utilized by multiple different processes and interrupt handlers, but if at the very end of the time slice the idle thread is scheduled on CPU, the whole slice is counted as idle time!

See also: https://github.com/torvalds/linux/blob/master/Documentation/...
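
For reference, here's a rough C sketch of how tools in the top family turn those /proc/stat jiffie counters into a utilization percentage - two samples, then a delta (field set simplified; newer kernels append guest fields after these eight):

    #include <stdio.h>
    #include <unistd.h>

    /* Read the aggregate "cpu" line: jiffies accumulated per state
       (user, nice, system, idle, iowait, irq, softirq, steal). */
    static void read_stat(long long *idle, long long *total)
    {
        long long v[8] = {0};
        FILE *f = fopen("/proc/stat", "r");
        fscanf(f, "cpu %lld %lld %lld %lld %lld %lld %lld %lld",
               &v[0], &v[1], &v[2], &v[3], &v[4], &v[5], &v[6], &v[7]);
        fclose(f);
        *idle = v[3] + v[4];                /* idle + iowait */
        *total = 0;
        for (int i = 0; i < 8; i++)
            *total += v[i];
    }

    int main(void)
    {
        long long i0, t0, i1, t1;
        read_stat(&i0, &t0);
        sleep(1);
        read_stat(&i1, &t1);
        printf("CPU utilization: %.1f%%\n",
               100.0 * (1.0 - (double)(i1 - i0) / (double)(t1 - t0)));
        return 0;
    }

Note how this says nothing about what the non-idle jiffies were spent on - which is exactly the article's complaint.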

10
jarpineh 15 hours ago 0 replies      
By clicking through some links on the article I found this: http://www.brendangregg.com/blog/2014-10-31/cpi-flame-graphs...

Now I wonder how easy, and how much manual work, it would be to do these combined flamegraphs with CPI/IPC information? My cursory search didn't find a mention after 2015... Perhaps this is still hard and complicated.

To me it seems really useful to know why a function takes so long to work (waiting or calculating) and not "merely" how long it takes. Even if the information is not perfectly reliable and can't be measured without affecting execution.

11
deathanatos 1 day ago 0 replies      
There's also loadavg. I've encountered a lot of people who think that a high loadavg MUST imply a lot of CPU use. Not on Linux, at least:

> The first three fields in this file are load average figures giving the number of jobs in the run queue (state R) or waiting for disk I/O (state D) averaged over 1, 5, and 15 minutes.

Nobody knows about the "or waiting for disk I/O (state D)" bit. So a bunch of processes doing disk I/O can cause loadavg spikes, but there can still be plenty of spare CPU.
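
For the curious, those figures are trivial to read programmatically - a minimal C sketch of parsing /proc/loadavg (glibc also wraps the averages as getloadavg(3)):

    #include <stdio.h>

    int main(void)
    {
        double m1, m5, m15;
        int running, total;
        FILE *f = fopen("/proc/loadavg", "r");
        fscanf(f, "%lf %lf %lf %d/%d", &m1, &m5, &m15, &running, &total);
        fclose(f);
        /* On Linux these averages count both runnable (R) tasks and
           uninterruptible disk-wait (D) tasks - the gotcha above. */
        printf("load: %.2f %.2f %.2f (%d/%d runnable)\n",
               m1, m5, m15, running, total);
        return 0;
    }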

12
deegu 10 hours ago 0 replies      
CPU frequency scaling can also lead to somewhat unintuitive results. On a few occasions I've seen CPU load % increase significantly after code was optimized. The optimization was still valid, and the actual executed instructions per work item went down, but the CPU load % went up since the OS decided to clock down the CPU due to the reduced workload.
13
joosters 15 hours ago 0 replies      
I can't see a mention of it here, or on the original page, so IMO it's worth pointing out a utility that you will most likely already have installed on your Linux machine: vmstat. Just run:

 vmstat 3
And you'll get a running breakdown of CPU usage (split into user/system), and a breakdown of 'idle' time (split into actual idle time and time waiting for I/O, or some kinds of locks).

The '3' in the command line is just how long the stats are averaged over; I'd recommend using 3+ to average out bursts of activity on a fairly steady-state system.

14
westurner 23 hours ago 1 reply      
Instructions per cycle: https://en.wikipedia.org/wiki/Instructions_per_cycle

What does IPC tell me about where my code could/should be async so that it's not stalled waiting for IO? Is combined IO rate a useful metric for this?

There's an interesting "Cost per GFLOPs" table here: https://en.wikipedia.org/wiki/FLOPS

Btw these are great, thanks: http://www.brendangregg.com/linuxperf.html

( I still couldn't fill this out if I tried: http://www.brendangregg.com/blog/2014-08-23/linux-perf-tools... )

15
surki 21 hours ago 0 replies      
Another related tool I found interesting: perf c2c

This will let us find the false sharing cost (cache contention, etc.).

https://joemario.github.io/blog/2016/09/01/c2c-blog/
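
For anyone who hasn't hit it before: false sharing is two threads hammering different variables that happen to live in the same cache line, so the line ping-pongs between cores. A minimal C sketch to reproduce the effect (assumes 64-byte cache lines; this is exactly the kind of program perf c2c is built to diagnose):

    #include <pthread.h>
    #include <stdio.h>

    /* Without the padding, both counters share one cache line and
       every increment invalidates the other core's copy. Uncomment
       the padding to give each counter its own 64-byte line and
       watch the wall time drop. */
    struct counters {
        volatile long a;
        /* char pad[56]; */
        volatile long b;
    } c;

    static void *bump_a(void *unused)
    {
        (void)unused;
        for (long i = 0; i < 100000000; i++) c.a++;
        return NULL;
    }

    static void *bump_b(void *unused)
    {
        (void)unused;
        for (long i = 0; i < 100000000; i++) c.b++;
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, bump_a, NULL);
        pthread_create(&t2, NULL, bump_b, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("a=%ld b=%ld\n", c.a, c.b);
        return 0;
    }

Running it under perf c2c record / perf c2c report (as in the linked post) should show the contended cache line.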

16
jeevand 1 day ago 1 reply      
Interestingly, IPC is also used to verify new chipsets at embedded companies. Run the same code on a newer generation chipset and see if IPC is better than the previous one. IPC is one of the main criteria for whether the new chipset is a hit or miss (others are power, etc.).
17
toast0 1 day ago 0 replies      
CPU util might be misleading, but CPU idle under a threshold at peak [1] means you need more idle CPU, and you can get that by getting more machines, getting better machines, or getting better code.

Only when I'm trying to get better code do I need to care about IPC and cache stalls. I may also want better code to improve the overall speed of execution, too.

[1] (~50% if you have a pair of redundant machines and load scales nicely, maybe 20% idle or even less if you have a large number of redundant machines and the load balances easily over them)

18
xroche 23 hours ago 1 reply      
> You can figure out what %CPU really means by using additional metrics, including instructions per cycle (IPC)

Correct me if I am wrong, but this won't work for spinlocks in busy loops: you do have a lot of instructions being executed, but the whole point of the loop is to wait for the cache to synchronize, and as such, this should be taken as "stalled".

19
gpvos 1 day ago 0 replies      
The server seems overloaded (somewhat ironically). Try http://archive.is/stDR0 .
20
kazinator 23 hours ago 1 reply      
This article is not as silly as it could be.

Let me help.

Look, CPU utilization is misleading. Did you forget to use -O2 when compiling your code? Oops, CPU utilization is now including all sorts of wasteful instructions that don't make forward progress, including pointless moves of dead data into registers.

Are you using Python or Perl? CPU utilization is misleading; it's counting all that time spent on bookkeeping code in the interpreter, not actually performing your logic.

CPU utilization also measures all that waste when nothing is happening, when arguments are being prepared for a library function. Your program has already stalled, but the library function hasn't started executing yet for the silly reason that the arguments aren't ready because the CPU is fumbling around with them.

Boy, what a useless measure.

21
glandium 1 day ago 1 reply      
I didn't know about tiptop, and it sounds interesting. Running it, though, it only shows "?" in the Ncycle, Minstr, IPC, %MISS, %BMIS and %BUS columns for a lot of processes, including, but not limited to, Firefox.
22
alkonaut 20 hours ago 0 replies      
Is there any easy way to do profiling that reveals CPU stalled because of pointer chasing, for "high level devs" on Windows?
23
heisenbit 19 hours ago 0 replies      
Any way to do something equivalent on OSX?
24
taeric 1 day ago 1 reply      
This is silly. The conceit that IPC is a simplification for "higher is better" is exactly the problem he has with utilization.

True, but useful? Most of us are busy trying to get writes across a networked service. Indeed, getting to 50% utilization is often a dangerous place.

For reference, running your car by focusing on rpm of the engine is silly. But, it is a very good proxy and even more silly to try and avoid it. Only if you are seriously instrumented is this a valid path. And getting that instrumented is not cheap or free.

25
buster 21 hours ago 0 replies      
This was very enlightening. I have the highest respect for Brendan and his insights, I must say.
26
JohnLeTigre 1 day ago 0 replies      
or your code could be riddled with thread contentions

I guess this is why he used the term likely

Interesting article though

27
valarauca1 1 day ago 0 replies      
We are what we measure.

Very true that at 100% CPU utilization the CPU is often waiting on bus traffic (loading caches, loading RAM, loading instructions, decoding instructions); only rarely is the CPU _doing_ useful work.

The context of what you are measuring depends if this is useful work or not. The initial access of a buffer almost universally stalls (unless you prefetched 100+ instructions ago). But starting to stream this data into L1 is useful work.

Aiming for 100%+ IPC is _beyond_ difficult even for simple algorithms and critical hot path functions. You not only require assembler cooperation (to assure decoder alignment), but you need to know _what_ processor you are running on to know the constraints of its decoder, uOP cache, and uOP cache alignment.

---

Perf gives you the ability to capture per-PID counters. Generally, just look at cycles passed vs. instructions decoded.

This gives you a general overview of stalls. Once you dig into IPC, front-end stalls, and back-end stalls, you start to see the turtles.

28
willvarfar 1 day ago 0 replies      
Using IPC as a proxy for utilization is tricky because an out-of-order machine can only get that max IPC if the instructions it is executing are not dependent on not-yet-computed instructions.

In-order CPUs are much easier to reason about; you can literally count the stalled cycles.

29
spullara 23 hours ago 0 replies      
Need a new metric "CPU efficiency".
30
jwatte 1 day ago 1 reply      
I think thinking about the CPU as mainly the ALU seems myopic. The job of the CPU is to get data into the right pipeline at the right time. Waiting for a cache miss means it's busy doing its job. Thus, CPU busy is a reasonable metric the way it is currently defined and measured. (After all, the memory controller is part of the CPU these days.)
31
nhumrich 1 day ago 1 reply      
Totally disagree with the premise of the article. Every metric tool that I know of that shows CPU utilization correctly shows CPU work. Load, on the other hand, represents CPU and iowait (overall system throughput). IO wait is also exposed in top as the "wait" metric. An Amazon EC2 box can very easily get to load(5) = 10 (anything above 1 is considered bad), but the CPU utilization metric will still show almost no CPU util.
32
flamedoge 1 day ago 2 replies      
> If your IPC is < 1.0, you are likely memory stalled,

depends on the workload.

33
gens 1 day ago 1 reply      
The core waiting for data to be loaded from RAM is busy. Busy waiting for data.

Instructions per cycle can also be misleading. Modern CPUs can do multiple shifts per cycle, but something like division takes a long time.

It all doesn't matter anyway, as instructions per cycle does not tell you anything specific. Use the CPU's built-in performance counters, use perf. It basically works by sampling every once in a while. It (perf, or any other tool that uses performance counters) shows you exactly what instructions are taking up your process's time. (Hint: it's usually the ones that read data from memory, so be nice to your caches.)

It's not rocket surgery.

34
tjoff 1 day ago 1 reply      
Well, this is the reason I hate HyperThreading: does your app consume 50% or 100%? With hyperthreading you have no clue.

And that is per core; it becomes increasingly meaningless on a dual-core, and on a quad-core and above you might as well replace it with MS Clippy.

And this is before discussing what that percentage really means.

edit: I'm interpreting the downvotes as people being in denial about this ;)

6
Solar Roof tesla.com
499 points by runesoerensen  9 hours ago   352 comments top 51
1
stratigos 3 hours ago 12 replies      
People are thinking way too much about how much this saves them at a personal level.

I think people should instead be thinking about how we can save the existence of the entire species, and all other higher order forms of life on earth, rather than focusing on their individual tax breaks, savings, or other trivial concerns. Yes, your cash flow is rendered quite trivial if life on Earth ends.

Invest in the Life Economy, and turn your back on the Death Economy. The value here is in the benefit to life, concern over state monopolized currencies clearly facilitates an economy of death.

2
blakesterz 7 hours ago 17 replies      
Yikes. I just signed a contract for a new roof here last month; it's going to cost about $12k. Just did the estimate for the Tesla Solar Roof... $80,300, so $87k if I want the battery too. I can barely afford the $12k right now; the $80k is just so far over it's not even close, even with how much I'd save over the years in electricity.

That being said, I love these things, so hoping it gets cheaper in the coming years.

3
mrtron 6 hours ago 2 replies      
Not sure why all the negative energy.

They are going after the portion of the market that would replace their roof with a high end material, and are interested in solar.

If you are a home owner in this situation, you could consider investing into your home. The roof will pay dividends over the next 30 years, and is attractive and durable.

I think it will do extremely well. Perhaps the best opportunity is in new construction. Imagine having 50k more baked into your mortgage, but having your roof lower your ongoing energy costs! Great potential in that market, could also optimize the roof designs for power generation.

4
IvanK_net 6 hours ago 14 replies      
I have always been a huge fan of a quick transition to sustainable energy sources. There is just one little thing I don't understand.

Why do they expect people to make electricity at their homes? You can buy a little piece of land in a desert, put solar panels there and distribute the electricity to other places. And you don't have to climb on any roof during the installation or the maintenance.

It is not profitable today in a free market to bake your own bread or to plant your own vegetables, because if it is done at a large scale by professionals, it can be made much cheaper while keeping good quality. So I don't understand how home-made electricity could economically compete with the professional energy farms of the future.

5
SirLJ 7 hours ago 4 replies      
Tesla acquired SolarCity in November in a deal worth $2.1 billion.

At the event, Musk said Tesla's roof would price competitively with normal roofs and could even cost less.

"It's looking quite promising that a solar roof will actually cost less than a normal roof before you even take the value of electricity into account," Musk said at the event. "So the basic proposition would be: Would you like a roof that looks better than a normal roof, lasts twice as long, costs less, and, by the way, generates electricity? It's like, why would you get anything else?"

6
11thEarlOfMar 5 hours ago 2 replies      
In modeling whether this makes sense, I looked at my annual electricity bill, which comes in at about $1,800/year. That's not enough savings opportunity to justify a ~$70,000 roof+batteries.

However, when I add 2 electric cars, the savings nearly triple [0]. Instead of buying gasoline, I'll be paying for electricity.

At $5,400/year, spending $70,000 starts to make some sense.

On the other hand, if I put up ugly panels and still use the Tesla batteries, aren't I going to save a lot more?

[0] 24,000 miles/year, 225 miles @ $10 per charge, vs. 25 mpg @$3/gal

EDIT: Corrected KWh charge... $10 is cost for one charge.

7
fernly 2 hours ago 2 replies      
For a counterweight let me present this interview[0] with the CEO of "the largest privately held solar contracting company in America", near the end of which he says several disparaging things about Tesla's roof, including,

> When I saw the demo he did at Universal Studios... What I saw was a piece of glass that looked like it had a cell in it. The challenge he's going to have is, how are you going to wire it? Every one of those shingles has to be wired.

> Roofs have valleys and they have hips and they have pipes. How are you going to work around that? How are you going to cut that glass? Are you going to cut right through the cell?

The latter question is perhaps answered by the posted article, "Solar Roof uses two types of tilessolar and non-solar." So Petersen's question is moot, the glass/solar tiles don't have to be cut to fit in a hip or around a flue, that will be done to the non-solar tiles that look the same.

The question of wiring is open: imagine the grid of wires that have to underly that roof, and getting them all put down without a break or a short, by big guys with nail guns (if you've ever watched roofers at work -- it isn't a precision operation).

Then Petersen goes on to say,

> So I would say for the record ... itll be cost-prohibitive. ... For $55,000 I can give you a brand-new roof that will last forever 50 years and I can give you all the solar you can handle. ... (Musks) product is going to be north of $100,000.

The graph in the posted article does not directly address total up-front installed cost, but rather tries to combine cost with some anticipated lifetime energy return -- a procedure with a LOT of variables and assumptions. I would like to see real numbers for a Tesla roof, $/sq.yd installed.

[0] http://www.mercurynews.com/2017/05/04/from-summer-job-to-sol...

8
palakchokshi 6 hours ago 5 replies      
2 self driving Teslas in the garage (making money when not used) $150,000

1 power wall battery pack $7,000

1 Solar Roof $80,000

Subtract

$15,000 in Federal tax credits for both cars

$5,000 in California tax credits for both cars

30% of 80,000 = $24,000 Solar Investment tax credit

$237,000 - $44,000

Grand Total $193,000

Calculate savings

$240 per month in gas

$100-$300 per month in electricity

$1000 - $2000 earned by the cars while not used by owner (10 years into the future)

$1340 to $2540 per month

$193,000/1340 = 144 months = 12 years to recover costs

$193,000/2540 = 76 months = 6.33 years to recover costs

Take away the income from the cars

$193,000/540 = 357 months = 30 years to recover costs

If PGE gives you money for putting excess electricity into the grid then you can recover costs faster.

9
quizme2000 6 hours ago 1 reply      
I think Elon got ripped off on his last shingle roof. The bar chart is nice but off by at least 150%. I've had many roofing subcontractors as clients past and present in Northern California. Based on an average of 870 roofs in 2016 for Single Family Residential homes in the bay area, Asphalt shingle roofs are $3.12 per square foot for materials and labor. The highest was $5.75 psf and the lowest $2.35 psf. Note that the SF bay area is considered one of the most expensive in roofing market. Also note that Solar City has a poor reputation in the industry for hard selling larger than needed residential solar systems.
10
myrandomcomment 2 hours ago 0 replies      
I need to replace my roof this year / next year. Cost ~15-20k for normal roofing, up to 50k for metal. I want solar on top of that and backup power. Just put my money down for this. Cost is under the 50k I was thinking about just for the metal roof!

Time will tell when they come out and do the survey to see how correct it is but I am excited.

11
MR4D 1 hour ago 0 replies      
Tesla has an error somewhere. I checked their website calculator vs. its source, Google Project Sunroof, and Tesla thinks my roof is 5 times bigger, with an electric bill nearly twice as high!

Doing some quick math, I can confirm that Google's numbers are reasonably close on both, while Tesla's are just plain wrong.

The result is a Tesla roof that would cost roughly $170,000. Worse, that's about half the value of my house!

I know - early days - but, wow, surprised by the estimate!

12
Matt3o12_ 6 hours ago 3 replies      
Does anyone understand how the warranty works? From their solar panel page[0], it says that there is a 30 year warranty for Power and Weatherization and a lifetime warranty.

So, what does the lifetime warranty cover? The only things that can go wrong are that either the power module fails or the tile is damaged by weather, both of which are covered by the 30-year warranty.

Nevertheless, a 30 year warranty is still pretty impressive and even more so if it covers normal wear and tear from weather.

[0]: https://www.tesla.com/solarroof?redirect=no

13
foxylad 3 hours ago 0 replies      
Does anyone know how the electrical connection works?

It seems to me that this is critical. If connections fail in a really hostile environment (high thermal range and moisture levels) then maintenance will kill any savings.

But if they've solved this problem, (and perhaps have an efficient way to replace tiles without removing the ones above), then I'd guess they will be wildly successful.

I once visited my brother who was having a new slate roof installed. While inspecting it, he saw a cracked slate on the bottom row. He insisted it be replaced, which meant removing an ever-increasing triangle of tiles above it, until you reached the ridge. The contractor did not have a good day.

14
nsxwolf 7 hours ago 3 replies      
Oh wow. So that's disappointing. I was under the impression it was about the same cost as a new roof. I guess it's starting at that cost, if you want just a tiny little bit of electricity.
15
zensavona 3 hours ago 0 replies      
I understand where these people who are saying it sucks and it's too expensive are coming from. It is more expensive than normal solar panels.

BUT! How many wealthy people have beautiful houses that don't have solar panels? Why do you think that is?

Tesla has this cool factor that didn't exist for environmentally friendly things before. How many super rich people drove electric cars or hybrids before? Now Teslas are one of the cool things to have.

They are absolutely targeting a different segment of the population, but I think overall it's a very positive thing and it'll probably work.

16
Arcsech 5 hours ago 4 replies      
I'm curious about the durability - I live in Colorado Springs, which is typically very sunny (good for solar), but can get pretty bad hailstorms. This means that the average roof lifetime here is much shorter than elsewhere. If Tesla's roof tiles are actually significantly more durable than asphalt, it could be more cost-effective here than elsewhere.
17
awqrre 2 hours ago 0 replies      
At my house, it would take more than 20 years to use up $53,500 worth of electricity, assuming the panels would be able to generate all the electricity that I need (and they probably would not, because my roof is not at the perfect angle). I probably will have to stick to a conventional roof.
18
fpgaminer 6 hours ago 2 replies      
I'm so absolutely excited for solar power. Tesla's Solar Roof, their PowerWall batteries, electric cars. It's all just painting such a bright future. Certainly Tesla has no monopoly on it, but they've made it sexy and are pushing the bleeding edge forward. Props to them.

We recently signed a contract to do an installation on our house (with a local contractor, not SolarCity). It can't happen soon enough! We'll have enough panels and batteries to be 100% off-grid throughout the entire year, plus get a good chunk of change back from the Net Metering every year. Pay off is only 8 years!

That installation is enough to cover our normal electric usage. Longer term I want to replace our gas appliances with electric and replace the car with a Tesla. Then we can double our solar installation to keep pace and BAM we will be 100% clean energy and off-grid. All while saving a bucket of money.

The thought of running off grid in the middle of a Southern California suburb? People might think me crazy, but guess what? At least we're doing our small part to save the planet, and saving money doing it. So who's the crazy one?

19
SwellJoe 6 hours ago 2 replies      
I'm super excited about all of the great stuff happening in solar recently, but whenever I read about the economics of home solar, I'm also always reminded of how stacked the deck is for wealthy people vs. poor folks. There's a very large federal tax credit for solar investment. That's great... but people who can't afford their own home get no such credit, and there's no way for them to get one. That's a super common trait for a lot of incentives; they go to the people who need them least. And the people who are getting these incentives are also using a lot more power (bigger houses, more power), so even with solar, their huge houses may still be contributing more to emissions than the poor folks who aren't getting any tax breaks living in apartments or rental properties.

I don't really have any answers on this, I just think it doesn't get talked about enough.

20
tankerdude 2 hours ago 0 replies      
Hrmmm... I just signed a contract a few weeks ago with SunPower solar panels. Over 40K for an almost 8kW system. It's a hefty cost but it's ready now.

It's a nearly flush mount so I'm ok with it and it's still a traditional look in the front where it's concrete tile that would last longer than I will live.

Lastly though, I wonder how they deal with valleys and different roof pitches. It would look a little odd unless it is non-functional.

21
DLarsen 7 hours ago 0 replies      
"...the glass itself will come with a warranty for the lifetime of your house, or infinity, whichever comes first."

Cute.

22
yeukhon 1 hour ago 0 replies      
> Installations will start in June, beginning with California

Would be interested in NYC.

So a couple questions for those already done this:

* any tax benefit / government subsidies I should be aware of?

* starting small, recommendation? pricing?

23
bootload 3 hours ago 0 replies      
"Solar Roof uses two types of tilessolar and non-solar. Looking at the roof from street level, the tiles look the same. Customers can select how many solar tiles they need based on their homes electricity consumption."

Game changer for suburban housing. This will accelerate the decentralisation of power generation, making it less likely that power failures will occur. Now for housing regulations at the state and municipal level to mandate solar tiles in construction.

24
qaq 5 hours ago 0 replies      
To all upset about the pricing: there are products targeting different income brackets. People in blah also can't understand how we spend half of their monthly income on some organic blah drink. Just because it does not make sense for your particular situation does not mean there is no market.
25
gigatexal 44 minutes ago 0 replies      
The infinity warranty is a draw for me.
26
lostgame 2 hours ago 0 replies      
I see a lot of these personal level-vs-global level discussions here but ultimately not enough posts celebrating the fact that we're looking at both, here, and looking at a potentially much better future because of it.

Go, Tesla.

27
bikamonki 2 hours ago 0 replies      
Sit down and dig this: in my country the State owns sunshine. Yep. They even made sure it was included in the last Constitution. So, if this tech ever becomes cheap enough for the masses, government will be ready to tax it.
28
accountyaccount 7 hours ago 0 replies      
Looks like these roofs take about 30 years to pay for themselves?
29
woodandsteel 4 hours ago 0 replies      
Interesting that the house in the picture has a chimney and a highly slanted roof. It looks like it is in the north, with lots of snow and relatively little sunlight.
30
dustinmoorenet 7 hours ago 1 reply      
With Space X internet on the horizon, I need to start designing my house in the country, preferably with a small roof.
31
dynofuz 3 hours ago 1 reply      
The hail ball test is deceptive. The Tesla tile is held with more support since it's horizontal: the max distance to any corner support is maybe 2-3 inches. The other natural tiles are vertical, and therefore have 4-5 inches to the farthest supported corners. It may still work, but we can't tell from that video.
32
jorblumesea 6 hours ago 0 replies      
While it's far better than other competitors, my asphalt roof was 7k, 30 year warranty rated to 120mph winds. Still a long way to go on the pricing part.

Super happy this is even a thing, 10 years ago this would have seemed like science fiction.

33
waynecochran 3 hours ago 0 replies      
Google's solar savings estimator is pretty cool -- I punch in my house address and it examines my roof line and notes how southerly each part of my roof lies.

Unfortunately, it says my cost would go up $53 a month! No point in getting solar if you live in Portland.

34
canterburry 6 hours ago 0 replies      
Yeah, I feel the coverage of this has been very deceptive. We just got a quote for our roof in SF (small house) and it was ~$20K with an upgraded architectural tile, new spouts and gutters.

Tesla would charge me $67K for the roof alone based on roof size and our energy use.

1. "Unlimited warranty" doesn't actually mean unlimited warranty when your roof starts leaking...just that the tile won't break.

2. Why the heck should I pre-pay Tesla for unrealized savings to my future energy use??

35
mrfusion 7 hours ago 1 reply      
How do these shingles connect to each other? And how are they affixed to the roof? You can't just nail them, right?
36
gumby 3 hours ago 0 replies      
FYI, LCoE calculations on Si PV assume a reduction of at least 50% of generating capacity within 20 years. This page just claims "30 years", which is outside the expected lifetime of any cells on the market today.
37
ctdonath 6 hours ago 2 replies      
People keep overlooking the objective value of not relying on "grid" power sources. Power goes off, your system keeps going. Gasoline supply stops (I've seen that a few times), you can just power your car at home. Your system fails, grid is likely still up to cover.

Supply-and-demand takes a sharp turn when supply is actually limited and can/does run out. At that point, having pre-paid for your own uninterrupted off-grid supply is worth a whole lot more.

38
owenversteeg 4 hours ago 3 replies      
Holy shit, these comments.

"My cars will make $24,000 per year" (in a comment justifying spending a quarter million dollars on Tesla products)

"People don't understand how we can spend half their monthly income on one organic drink" (in a completely unrelated comment)

"if I buy two Teslas my savings triple!" (another person justifying Tesla's marketing with some creative math)

Sounds like something you'd hear from protesters mocking the 1% but no just another day here on HN.

I've been here (in various incarnations) long enough to say this, so could we try to be just a little bit more self aware? 24k/yr is nearly double the minimum wage. A quarter million dollars is a truly immense amount of money. And buying two cars is a dream for most of America, ignoring the fact that those are two Teslas, which are roughly $70k cars (and no, you can't currently buy any $35k Teslas no matter what Musk's Twitter says.)

39
rurabe 5 hours ago 1 reply      
Best part

"Tile warranty: Infinity, or the lifetime of your house, whichever comes first"

40
ed_balls 4 hours ago 0 replies      
Apart from people buying these for status, there is another market - Hawaii and other remote places where prices are 2x to 5x compared to California.
41
myroon5 6 hours ago 1 reply      
I'm surprised they didn't team up more with Google's Project Sunroof or Zillow or create their own version of those projects, so that you could just put in your home address and get all the relevant details. Had to check Zillow to find out my own square footage.
42
brianbreslin 6 hours ago 1 reply      
Any idea whether this would improve your home's resale value? Also, it won't let you go to a full 100% (max 70%) roof coverage. Do they fill the rest with regular tiles?

The average person would have to finance this. So what's the true cost?

43
altano 5 hours ago 0 replies      
"Your Solar Roof can generate $123,900 of energy over 30 years."

Why doesn't the calculator tell me the estimated kWh production instead of a dollar figure that means nothing to me?

44
amelius 6 hours ago 1 reply      
Why don't banks (or even Tesla for that matter) finance such roofs upfront? It seems they can make money out of this.
45
dharma1 7 hours ago 1 reply      
UK availability?
46
dEnigma 5 hours ago 0 replies      
"In doing our own research on the roofing industry, it became clear that roofing costs vary widely, and that buying a roof is often a worse experience than buying a car through a dealership."

Seems like someone just couldn't resist putting that little jab in there.

47
abc_lisper 6 hours ago 1 reply      
Where is the finance option?
48
pfarnsworth 4 hours ago 0 replies      
I am currently waiting to have my roof redone in the next couple of weeks. It's going to cost $20k. I went through the calculator and it said that my roof would be about $30k after rebates, with no battery. That, to be honest, is something I wish I had known before I signed the contract to get my roof done. I don't, however, use much electricity. I use about $70/month max for my entire house, so I would literally have to convert everything over to electricity in order for this to be more worthwhile. But at this point, there's no incentive for me to ever get the solar roof, unfortunately, having JUST dumped $20k into my shingle roof.
49
EGreg 4 hours ago 0 replies      
I love their graph, with the solar roof being the only "negative-cost" roof. Really drives the (sales) point home.
50
OrthoMetaPara 3 hours ago 1 reply      
This is the dumbest thing ever. If you live in a city or a suburb, you don't need one of these things because you'll be connected to a grid that can give you electricity that is far more efficiently generated. If you live in a rural area with a lot of sun, then you can just put solar panels on the ground where they're not a bitch to clean.

I'm not against using solar electricity because it can be made affordable but this idea is equivalent to the backyard blast furnaces in Maoist China. It's a waste of time and only useful for status signaling to your eco-chic friends.

51
SirLJ 7 hours ago 0 replies      
Way too expensive; as always, too much hype and then nothing...
7
Remotely Exploitable Type Confusion in Windows 8, 8.1, 10, Windows Server, etc chromium.org
589 points by runesoerensen  2 days ago   176 comments top 26
1
statictype 2 days ago 8 replies      
NScript is the component of mpengine that evaluates any filesystem or network activity that looks like JavaScript. To be clear, this is an unsandboxed and highly privileged JavaScript interpreter that is used to evaluate untrusted code, by default on all modern Windows systems. This is as surprising as it sounds.

Double You Tee Eff.

Why would mpengine ever want to evaluate javascript code coming over the network or file system? Even in a sandboxed environment?

What could they protect against by evaluating the code instead of just trying to lexically scan/parse it?

(I'm sure they had a reason - wondering what it is)

2
to3m 2 days ago 5 replies      
SourceTree is pretty much unusable on my laptop, because every time it does anything the antimalware service springs into life and uses up anything from 20%-80% of the CPU power available. I've had it take 30 seconds to revert 1 line. It's stupid.

I was very much prepared to blame Atlassian for this, but maybe I need to start thinking about blaming Microsoft instead, because it sounds like they've made a few bad decisions here.

(Still, if my options are this, or POSIX, I'll take this, thanks. Dear Antimalware Service Executable, please, take all of my CPUs; whatever SourceTree is doing, I can surely wait. Also, please feel free to continue to run fucking Javascript as administrator... I don't mind. It's a small price to pay if it means I don't have to think about EINTR or CLOEXEC.)

3
jeffy 1 day ago 0 replies      
Contents of the PoC are a ".zip" file that is actually plain-text (the engine ignores extension/mime types) and contains just this line of JS and 90kb of nonsense JS for entropy.

(new Error()).toString.call({message: 0x41414141 >> 1})

It's hard to imagine MS doesn't receive tons of Watson crash reports of MsMpEng trying to run bits of random JS. If they haven't looked at them, they probably should start now.
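
For readers unfamiliar with the bug class: "type confusion" means the engine trusts that message holds its own internal string type and dereferences it, when the attacker has actually stored a number there. A contrived C sketch of the pattern (not the actual mpengine code, just an illustration of why 0x41414141 >> 1 ends up treated as a pointer):

    #include <stdio.h>
    #include <stdint.h>

    /* The interpreter's internal string object... */
    struct js_string {
        uint32_t length;
        const char *data;
    };

    /* ...and a tagged value that can hold a string or a number. */
    struct js_value {
        int tag;                 /* 0 = number, 1 = string */
        union {
            int32_t number;
            struct js_string *string;
        } u;
    };

    /* Buggy toString: assumes v is a string without checking the
       tag. If an attacker stored a number, that attacker-controlled
       integer is dereferenced as a pointer - type confusion. */
    void buggy_to_string(struct js_value *v)
    {
        struct js_string *s = v->u.string;          /* no tag check! */
        printf("%.*s\n", (int)s->length, s->data);  /* crash or worse */
    }

    int main(void)
    {
        struct js_value v;
        v.tag = 0;
        v.u.number = 0x41414141 >> 1;  /* the PoC's fake "pointer" */
        buggy_to_string(&v);           /* boom */
        return 0;
    }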

4
pierrec 2 days ago 0 replies      
I think this sentence sums up the severity pretty well:

The attached proof of concept demonstrates this, but please be aware that downloading it will immediately crash MsMpEng in its default configuration and possibly destabilize your system. Extra care should be taken sharing this report with other Windows users via Exchange, or web services based on IIS, and so on.

And I think the intended formulation was "care should be taken sharing this report with other Windows users or via Exchange, or web services based on IIS..." (because they're afraid it could crash the servers even if sharing between non-Windows users!)

5
e12e 1 day ago 1 reply      
Did anyone manage to figure out a simple PowerShell incantation to determine whether a system is properly patched/secure?

https://technet.microsoft.com/en-us/library/security/4022344

Simply lists: "Verify that the update is installed

Customers should verify that the latest version of the Microsoft Malware Protection Engine and definition updates are being actively downloaded and installed for their Microsoft antimalware products.

For more information on how to verify the version number for the Microsoft Malware Protection Engine that your software is currently using, see the section, "Verifying Update Installation", in Microsoft Knowledge Base Article 2510781.

For affected software, verify that the Microsoft Malware Protection Engine version is 1.1.10701.0 or later."

As far as I can figure out, if:

Get-MpComputerStatus|where -Property AMEngineVersion -ge [version]1.1.10701.0|select AMEngineVersion

prints something like:

 AMEngineVersion --------------- 1.1.13704.0
according to MS one should be patched-up and good to go? (The command should print nothing on vulnerable systems).

However a hyper-vm last patched before Christmas (it's not networked), lists it's version as: 1.1.12805.0 -- which certainly seems to be a higher version than 1.1.10701.0?

I'll also note that using "[version]x.y.z.a" apparently does not force some kind of magic "version compare"-predicate, based on some simple tests.

Any powershell gurus that'd care to share a one-liner to check if one has the relevant patches installed?

Am I looking at the wrong property?

6
scarybeast 2 days ago 1 reply      
Props on the fast fix; anti-props on running an unsandboxed JavaScript engine at SYSTEM privileges and feeding it files from remote.
7
pedrow 1 day ago 3 replies      
Quick question on the timings of this. The report says that "This bug is subject to a 90 day disclosure deadline." - does that mean it was discovered 90 days ago and has been published now, or it was discovered on May 6 (as dates on the comments seem to suggest) and Microsoft has responded very quickly? In either case it seems strange not to have waited a couple more days because (for my system, anyway) I was still running the vulnerable version even after the report was made public.
8
icf80 1 day ago 0 replies      
The affected products:

Microsoft Forefront Endpoint Protection 2010

Microsoft Endpoint Protection

Microsoft Forefront Security for SharePoint Service Pack 3

Microsoft System Center Endpoint Protection

Microsoft Security Essentials

Windows Defender for Windows 7

Windows Defender for Windows 8.1

Windows Defender for Windows RT 8.1

Windows Defender for Windows 10, Windows 10 1511, Windows 10 1607, Windows Server 2016, Windows 10 1703

Windows Intune Endpoint Protection

Last version of the Microsoft Malware Protection Engine affected by this vulnerability: Version 1.1.13701.0

First version of the Microsoft Malware Protection Engine with this vulnerability addressed: Version 1.1.13704.0

https://technet.microsoft.com/en-us/library/security/4022344

9
arca_vorago 1 day ago 2 replies      
I'm pretty close to just saying I refuse to work on Windows systems anymore.
10
NKCSS 1 day ago 1 reply      
Turn off Windows Defender:

 Windows Registry Editor Version 5.00

 [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows Defender]
 "DisableAntiSpyware"=dword:00000001
Then reboot.

On the other hand: Microsoft has already issued a fix: https://twitter.com/msftsecresponse/status/86173436019355238...

But still, the auto-unpacking of archives leaves me wanting to just disable it completely.

11
windsurfer 1 day ago 0 replies      
This also includes Windows 7 and anything running Microsoft Security Essentials, but does not include any Windows Server other than 2016.
12
jonstewart 1 day ago 0 replies      
Does MsMpEng actually do file analysis itself - unpacking, unarchiving, &c? That's the kind of stuff that should usually be sandboxed. If its zip/rar/7zip/cab/whatever support hasn't been formally verified and those components run as SYSTEM, es no bueno.
13
dboreham 2 days ago 2 replies      
It only took two days to fix this and release the patch? Impressed.
14
caf 2 days ago 1 reply      
As mpengine will unpack arbitrarily deeply nested archives...

Surely not - what happens if you feed it the zipfile quine?

15
dagaci 1 day ago 2 replies      
I am not happy that Google has published a full exploit well before it has been possible for anyone to actually deploy the patch, and within just 3 days of notifying the vendor.

It seems that Google is eager for someone to use this exploit to attack as many systems as possible before they can be patched against it.

16
ezoe 1 day ago 1 reply      
So MS's anti malware software does:

1. Execute NScript, a JavaScript-like language.

2. Run as high privileged, non-sandboxed process.

3. Intercept filesystem changes and run NScript code written to anywhere, including browser cache.

4. Do not check code signing.

This is unbelievably ridiculous. It should not happen in software that claims to improve "security".

As I've always said, there is no good anti-malware software. Everything sucks.

An additional software is an additional security risk.

17
rubatuga 1 day ago 1 reply      
Congratulations Microsoft, on your best exploit yet!
18
jbergstroem 2 days ago 2 replies      
Exploitability Assessment forLatest Software Release: 2 - Exploitation Less Likely

Exploitability Assessment forOlder Software Release: 2 - Exploitation Less Likely

Anyone with ideas on how they came to this conclusion? Yes, I read the linked document, but felt that the index assessment didn't really reflect that Google (Natalie?) seems to have found this "in the wild".

19
btb 1 day ago 0 replies      
Good that it was fixed. But now bad actors will be looking very hard for other bugs in the unsandboxed JavaScript interpreter. Tempting to just disable Windows Defender completely.
20
polskibus 1 day ago 0 replies      
I wonder how it affects Azure? Can such a security hole affect Azure security?
21
binome 1 day ago 1 reply      
At least the good guys found this one first, and it is in Windows Defender, and the definitions should automatically update in 24hrs or less silently without a reboot.
22
ms_skunkworks 1 day ago 0 replies      
Was mpengine developed by Microsoft Research?
23
nathan_f77 1 day ago 1 reply      
This is amazing work. Does anyone know how much someone like Tavis Ormandy would be getting paid? Would it be 7 figures?
24
nthcolumn 1 day ago 0 replies      
malware injection service lol.
25
Kenji 1 day ago 1 reply      
Me, almost a year ago:

https://news.ycombinator.com/item?id=12184173

Despite getting all the downvotes, who is looking stupid now?

26
madshiva 1 day ago 2 replies      
Hey Tavis,

if you read this, could you tell Microsoft to fix the issue with definition updates that won't be removed after updating? The definitions keep growing and waste space (the problem resolves itself if the computer is rebooted).

Thanks :)

8
Visual Studio for Mac visualstudio.com
453 points by insulanian  13 hours ago   247 comments top 44
1
0x0 12 hours ago 5 replies      
I find the naming "Visual Studio for Mac" pretty deceptive, since apparently it is not anything like the win32 VS environment, but instead based on Xamarin Studio. Even the tagline is deceptive: "The IDE you love, now on the Mac".

I would guess this won't let you build/debug win32 or winforms or wpf applications, or install any .vsix extensions from the visual studio marketplace (of which there are lots of useful ones, such as this one to manage translations - https://marketplace.visualstudio.com/items?itemName=TomEngle... ) - correct me if I'm wrong, but if I can't install my .vsix extensions, this is not "the IDE you love, now on the Mac".

2
jot 11 hours ago 5 replies      
Almost 10 years since I exchanged emails with Steve Ballmer about this: https://medium.com/@jot/me-and-steve-ballmer-in-2007-68456a5...
3
fotbr 10 hours ago 6 replies      
Since there's a PM here from Microsoft, I've got a couple questions regarding the requirement to "sign in with your Microsoft account":

With all your branding changes over the years, what's considered a Microsoft account today? My old Hotmail account, that existed from the days before Microsoft bought Hotmail? I think it's still alive, but I haven't logged in in the better part of a decade to find out. The accounts created over the years for various Xbox machines? I think those are still around, but I doubt I could get into them at this point. The "Live" account I had to create for MSDN many years ago? Once that job and associated need for MSDN ended I've not logged in to see if it's still around.

Which one(s) should I try to find login information for to use?

Furthermore, why must I sign in in the first place for the free version? I can understand signing in to associate the install with a paid version with extra features, but I see no reason to require it for free versions without any paid features.

4
satysin 11 hours ago 2 replies      
I really wish Microsoft had made UWP cross-platform. Would be pretty amazing if I could use UWP/C# to target Windows, Linux, macOS, iOS and Android properly. With UWP being limited to just Windows I don't see it ever being a success.
5
srcmap 11 hours ago 4 replies      
I used to be a big VB, VC++ fan boy a long time ago. 1995 :-) Have since moved on....

Tried building a few open source apps with VS once a year for the past few years and found that I couldn't compile a single Windows open source package from GitHub or SourceForge, even after weeks of trying.

The code might claim to build with VS10 or VS12, but the dependency libraries will need completely different VS versions of the .xml, .proj, .sln build systems.

I challenge the PM of the VS product to try to build a few popular projects such as Python, VLC, or anything on http://opensourcewindows.org/. Document the process of building the app and its dependency libraries. Compare that to the process of building those same packages on a Mac (with brew) or on Linux.

On Linux, for all the packages I like to play with, "./configure && make" handles most of the build in a few minutes. It's even easier on Ubuntu with the apt-get source/build commands. The process is very similar on a Mac.

Even the Linux kernel I can build easily with pretty much the same 1-2 commands, as I have for the past 20 years.

6
kraig911 12 hours ago 2 replies      
Is this more than just Xamarin? I'm sorry -- I tried last time and that was the impression I got. I know it says it has ASP.NET Core, but can I truly build .NET web-service-based apps now without Parallels?
7
holydude 12 hours ago 3 replies      
The only problem I have with MS's ecosystem is their love of having a lot of concepts and names for everything. I am literally lost: I do not know which .NET/<whatever> is which and how it is used.

So is this just Xamarin repackaged?

8
delegate 12 hours ago 3 replies      
Does it support C++? To me, "Visual Studio" is about C++ development and I miss a similarly powerful C++ IDE on the Mac.

From what I can see, it only supports C# (and family), so what good is it to a C++ / OSX dev?

9
yread 12 hours ago 2 replies      
Microsoft Build is becoming an event where hell freezes over lately. VS on Mac, Linux on Windows, open source asp.net and .net, SQL Server on Linux
10
BugsJustFindMe 12 hours ago 2 replies      
It would be really nice to have a microsoft rep in here to answer questions. Because what I really want is visual studio that can build C++ win32 MFC executables without having to run Windows in a virtual machine. Can it do that? I don't know.
11
zamalek 12 hours ago 2 replies      
> Xamarin

Isn't this just MonoDevelop? Or have Microsoft added secret sauce to the mix?

12
vetinari 12 hours ago 1 reply      
Again, online installer only. Did something change recently that makes it difficult to provide a full, offline installer?

If so, JetBrains didn't notice, because they are still able to do that for their products.

13
NDT 12 hours ago 1 reply      
I don't understand. I've been using VS on Mac for the past 3 months to develop C# applications for a class of mine. Was that just a beta? What's so different about this?
15
nobleach 11 hours ago 0 replies      
I sincerely would LOVE to have an F# development IDE that didn't ask me to install Mono. I don't have anything against Mono, per se, I just want to see that Microsoft officially supports it across the three major platforms.
16
avenoir 3 hours ago 0 replies      
Is anybody doing professional development on .NET using VS for Mac? All this time I thought it was just Xamarin tools, but it looks like it actually has .NET Core project templates too. This has been the only thing that kept me away from Macs as a .NET dev.
17
legohead 9 hours ago 1 reply      
Crashes during install process for me. :\

Looks like during Xamarin installation: /Users/USER/Downloads/Install Visual Studio.app/Contents/MacOS/Install_Xamarin - Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[NSScrollView heightAnchor]: unrecognized selector sent to instance 0x6080003c0870'

Bummer.

18
kapuru 8 hours ago 1 reply      
Any .NET MVC developers here? I always wanted to learn ASP.NET MVC, but never did because I was scared of the deployment situation on Linux. Has anything changed in that regard? Would you say deploying a .NET web app works almost as smoothly on Linux as, say, a Node.js app?
19
gaza3g 3 hours ago 0 replies      
I'm currently working on an MVC5 project on .NET 4.6.1 using VS2015 on Windows.

Can I load my solution in VS for Mac and have it work out of the box (restoring NuGet packages, etc.)?

20
zzbzq 7 hours ago 0 replies      
Coincidentally I was just using this & Xamarin Studio on the Mac today. I didn't realize VS for Mac had been released; I already had the beta.

So far I don't like it as much! Not sure which features here I actually care about, as I'm just using Mono. The pads no longer make sense in VS for Mac. I just have debug pads open all the time. I can't really tell when I've stopped debugging. There are weird buttons on the pads that do nothing. Not sure why all the clutter is here; Xamarin Studio had this stuff figured out.

21
blowski 12 hours ago 1 reply      
Anyone know what support is planned for other languages? e.g. Go, Ruby, and PHP.
22
mb_72 10 hours ago 0 replies      
More good news from the MS / Xamarin camp. A few years ago I 'bet the farm' on using Xamarin for Mac to develop a Mac version of our PC application (with shared code in a PCL); since that time Xamarin (and then MS/Xamarin after the buyout) have rarely failed to impress. Kudos to the team.
23
dohboy 10 hours ago 1 reply      
Same strategy as always: rebrand current products and call them new. This is not Visual Studio as known from Windows, but Xamarin Studio rebranded. The title should be "Microsoft releases Xamarin update"...
24
mixedCase 11 hours ago 1 reply      
Again? I've seen the announcement for its release three times on HN.

And before someone mentions it, no I'm not confusing it with VS Code. I mean "Visual Studio for Mac", the Xamarin Studio fork.

25
rcarmo 8 hours ago 0 replies      
I've been waiting for this for a while. Only trouble so far is that the installer comes up in the wrong locale for me (it ignores the language ordering in Preferences and displays the installer in my secondary/input language, not English, unlike fully native apps).
26
JohnnyConatus 7 hours ago 0 replies      
Is VS for Mac recommended for TypeScript development? I'm using VS Code right now.
27
jhasse 8 hours ago 1 reply      
This is still using GTK+ (3?), right?

How did they manage to integrate the buttons in the title bar with it?

28
baltcode 8 hours ago 1 reply      
Is there any way to run this/download a compatible version for OSX 10.9?
29
jhwhite 11 hours ago 0 replies      
Is this VS? Or just Xamarin? Could I do Python development on it like I can with Win VS?
30
relyks 12 hours ago 4 replies      
Will this allow you to make cross-platform Windows Forms applications?
31
Clobbersmith 7 hours ago 2 replies      
Is there a reason why the installer is in French? My preferred language is set to English.
32
perseusprime11 12 hours ago 0 replies      
Visual Studio Code is the way to go on Mac.
33
alex_suzuki 10 hours ago 0 replies      
Any chance we're going to get a Hololens development environment for the Mac anytime soon?
34
DeepYogurt 8 hours ago 0 replies      
Hit download and get a popup for a free 60 day course. No thanks.
35
EGreg 4 hours ago 0 replies      
Can this do PHP and Javascript / Web Development?

Objective C? Swift?

36
exabrial 8 hours ago 0 replies      
I don't even know what's real anymore...
37
jbmorgado 12 hours ago 0 replies      
I can't really gauge the full depth from the announcement, but to me this looks like something that has already existed for a few years: Xamarin.

What are the differences between this product and Xamarin for macOS (something that already existed)?

38
genzoman 12 hours ago 1 reply      
First-rate development experience on the Mac. MS is slaying it lately.
39
wkirby 11 hours ago 1 reply      
Is the installer in Chinese for anyone else?
40
minhoryang 8 hours ago 0 replies      
So beautiful!
41
itsdrewmiller 11 hours ago 0 replies      
Let me know when this supports .NET 4.
43
mcjon77 12 hours ago 0 replies      
No lie, when I saw the title of this thread for a few seconds I was confused and wanted to check my calendar. I kept thinking "Is this April 1st?".
44
bitmapbrother 7 hours ago 0 replies      
Does Visual Studio for Mac have the same functionality as Visual Studio for Windows? If not, then they should really stop confusing customers by rebranding a product that has nothing to do with Visual Studio for Windows.
9
The tragedy of 100% code coverage (2016) ig.com
527 points by tdurden  2 days ago   337 comments top 71
1
cbanek 2 days ago 12 replies      
I've had to work on mission critical projects with 100% code coverage (or people striving for it). The real tragedy isn't mentioned though - even if you do all the work, and cover every line in a test, unless you cover 100% of your underlying dependencies, and cover all your inputs, you're still not covering all the cases.

Just because you ran a function or ran a line doesn't mean it will work for the range of inputs you are allowing. If your function that you are running coverage on calls into the OS or a dependency, you also have to be ready for whatever that might return.

Therefore you can't tell if your code is right just by having run it. Worse, you might be lulled into a false sense of security by saying it works because that line is "covered by testing".
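
To make that concrete, a minimal Java sketch of the trap (the DAO and all names here are hypothetical):

  public class CoveredButFragile {
      // Hypothetical dependency, standing in for "the OS or a dependency".
      interface UserDao { String findName(int id); }

      // Every line below is exercised by the happy-path call in main, so it
      // counts as "covered" -- yet it blows up the first time the DAO returns null.
      static String firstName(UserDao dao, int id) {
          return dao.findName(id).split(" ")[0];
      }

      public static void main(String[] args) {
          UserDao happyDao = id -> "Ada Lovelace";
          System.out.println(firstName(happyDao, 1));  // "Ada" -- 100% line coverage

          UserDao emptyDao = id -> null;               // a return value nobody tested
          System.out.println(firstName(emptyDao, 2));  // NullPointerException
      }
  }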

The real answer is to be smart, pick the right kind of testing at the right level to get the most bang for your buck. Unit test your complex logic. Stress test your locking, threading, perf, and io. Integration test your services.

2
mannykannot 1 day ago 2 replies      
There are a few relevant facts that should be known to everyone (including managers) involved in software development, but which probably are not:

1) 100% path coverage is not even close to exhaustively checking the full set of states and state transitions of any usefully large program (see the sketch at the end of this comment).

2) If, furthermore, you have concurrency, the possible interleavings of thread execution blow up the already-huge number of cases from 1) to the point where the latter look tiny in comparison.

3) From 1) and 2), it is completely infeasible to exhaustively test a system of any significant size.

The corollary of 3) is that you cannot avoid being selective about what you test for, so the question becomes, do you want that decision to be an informed one, or will you allow it to be decided by default, as a consequence of your choice to aim for a specific percentage of path coverage?

For example, there are likely to be many things that could be unit-tested for, but which could be ruled out as possibilities by tests at a higher level of abstraction. In that case, time spent on the unit tests could probably be better spent elsewhere, especially if (as with some examples from the article) a bug is not likely.

100% path coverage is one of those measures that are superficially attractive for their apparent objectivity and relative ease of measuring, but which don't actually tell you as much as they seem to. Additionally, in this case, the 100% part could be mistaken for a meaningful guarantee of something worthwhile.
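
To make point 1) concrete, a minimal Java sketch of a hypothetical function: two tests reach 100% branch coverage while never taking the one path that fails:

  public class PathsVsBranches {
      // f(true, true) and f(false, false) together cover every branch,
      // yet the path taken by f(false, true) divides by zero.
      static int f(boolean a, boolean b) {
          int x = 0;
          if (a) x += 10;
          if (b) x = 100 / x;   // ArithmeticException when b is true and a is false
          return x;
      }

      public static void main(String[] args) {
          System.out.println(f(true, true));    // 10 -- both "true" branches
          System.out.println(f(false, false));  // 0  -- both "false" branches
          // 100% branch coverage reached above; the path below was never seen:
          System.out.println(f(false, true));   // throws ArithmeticException
      }
  }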

3
iamleppert 1 day ago 15 replies      
The worse the developer, the more tests he'll write.

Instead of writing clean code that makes sense and is easy to reason about, he will write long-winded, poorly abstracted, weird code that is prone to breaking without an extensive "test suite" to hold the madness together and god forbid raise an alert when some unexpected file over here breaks a function over there.

Tests will be poorly written, pointless, and give an overall false sense of security to the next sap who breaths a sigh of relief when "nothing is broken". Of course, that house of cards will come down the first time something is in fact broken.

I've worked in plenty of those environments, where there was a test suite, but it couldn't be trusted. In fact, more often than not that is the case. The developers are a constant slave to it, patching it up; keeping it all lubed up. It's like the salt and pepper on a shit cake.

Testing what you do and developing ways to ensure its reliable, fault-tolerant and maintainable should be part of your ethos as a software developer.

But being pedantic about unit tests, chasing after pointless numbers and being obsessed with a certain kind of code is the hallmark of a fool.

4
mikestew 2 days ago 2 replies      
The tragedy of 100% code coverage is that it's a poor ROI. One of the things that stuck with me, going on twenty years later, is an IBM study that said 70% is where the biggest bang-for-the-buck is. Now maybe you might convince me that something like Ruby needs 100% coverage, and I'd agree with you, since some typing errors (for example) are only going to come up at runtime. But a compiled (for some definition of "compiled") language? Meh, you don't need to check every use of a variable at runtime to make sure the data types didn't go haywire.

The real Real Tragedy of 100% coverage is the number of shops who think they're done testing when they hit 100%. I've heard words to that effect out of the mouth of a test manager at Microsoft, as one example. No, code coverage is a metric, not the metric. Code coverage doesn't catch the bugs caused by the code you didn't write but should have, for example. Merely executing code is a simplistic test at best.

5
algesten 2 days ago 3 replies      
My main issue with unit testing is: what defines a unit?

Throughout my career I find tests that test the very lowest implementation details, like private helper methods, and even though a project can achieve 100% coverage, it is still no help in avoiding bugs or regressions.

Given a micro service architecture I now advocate treating each service as a black box and focus on writing tests for the boundaries of that box.

That way tests actually assist with refactoring rather than being something that just exactly follows the code and breaks whenever a minor internal detail changes.

However, occasionally I do find it helpful to map out all inputs/outputs for an internal function to cover all edge cases. But that's the exception.

6
xg15 1 day ago 0 replies      
I agree (mostly) with the author's standpoints, but his arguments for getting there are not convincing:

> You don't need to test that. [...] The code is obvious. There are no conditionals, no loops, no transformations, nothing. The code is just a little bit of plain old glue code.

The code invokes a user-passed callback to register another callback and specifies some internal logic if that callback is invoked. I personally don't find that obvious at all.

Others may find it obvious. That's why I think, if you start with the notion "this is necessary to test, that isn't", you need to define some objective criteria for when things should be tested. Relying on your own gut feeling (or expecting that everyone else magically has the same gut feeling) is not a good strategy.

If I rewrite some Java code from vanilla loops-with-conditionals into a stream/filter/map/collect chain, that might make it more obvious, but it wouldn't suddenly remove the need to test it, would it?
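
(For illustration, a hypothetical before/after of such a rewrite; the User type is made up:)

  import java.util.ArrayList;
  import java.util.List;
  import java.util.stream.Collectors;

  public class StreamRewrite {
      record User(String name, boolean active) {}  // hypothetical type for the sketch

      public static void main(String[] args) {
          List<User> users = List.of(new User("ada", true), new User("bob", false));

          // Before: vanilla loop with conditionals.
          List<String> loopNames = new ArrayList<>();
          for (User u : users) {
              if (u.active()) {
                  loopNames.add(u.name().toUpperCase());
              }
          }

          // After: the same logic as a filter/map/collect chain. Arguably more
          // obvious, but the behavior is unchanged and deserves the same tests.
          List<String> streamNames = users.stream()
                  .filter(User::active)
                  .map(u -> u.name().toUpperCase())
                  .collect(Collectors.toList());

          System.out.println(loopNames.equals(streamNames));  // true
      }
  }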

>"But without a test, anybody can come, make a change and break the code!"

>"Look, if that imaginary evil/clueless developer comes and breaks that simple code, what do you think he will do if a related unit test breaks? He will just delete it."

You could make that argument against any kind of automated test. So should we get rid of all kinds of testing?

Besides, the argument doesn't even make sense. No one is using tests as a security feature against "evil" developers (I hope). (One of) the points of tests is to be a safeguard for anyone (including yourself) who might change the code in the future and might not be aware of all the implications of that change. In that scenario, it's very likely you change the code but will have a good look at the failed test before deciding what to do.

7
pg314 1 day ago 1 reply      
The article illustrates what happens when you have inexperienced or poor developers following a management guideline.

To see how 100% coverage testing can lead to great results, have a look at the SQLite project [1].

In my experience, getting to 100% takes a bit of effort. But once you get there it has the advantage that you have a big incentive to keep it there. There is no way to rationalise that a new function doesn't need testing, because that would mess up the coverage. Going from 85% to 84% coverage is much easier to rationalise.

And of course 100% coverage doesn't mean that there are no bugs, but x% coverage means that 100-x% of the code is not even run by the tests. Do you really want your users to be the first ones to execute the code?

As an anecdote, in one project where I set the goal of 100% coverage, there was a bug in literally the last uncovered statement before getting to 100%.

[1] https://www.sqlite.org/testing.html

8
eksemplar 1 day ago 2 replies      
We've almost stopped unit testing. We still test functionality automatically before releasing anything into production, but we don't write unit tests in most cases.

Our productivity is way up and our failure rates haven't changed. It's increased our time spent debugging, but not by as much as we had estimated that it would.

I won't pretend that's a good decision for everyone. But I do think people take test-driven-development a little too religiously and often forget to ask themselves why they are writing a certain unit test.

I mean, before I was a manager I was a developer and I also went to a university where a professor once told me I had to unit test everything. But then, another professor told me to always use the singleton pattern. These days I view both statements as equally false.

9
penpapersw 2 days ago 2 replies      
I think a bigger epidemic is that we're putting too much emphasis on "do this" and "do that" and "if you don't do this then you're a terrible programmer". While that sometimes may be true, what matters much more is having competent, properly trained professionals, who can reason and think critically about what they're doing, and who have a few years of experience doing this under their belt. Just like other skilled trades, there's a certain kind of knowledge that you can't just explain or distill into a set of rules; you have to just know it. And I see that in the first example in this article, where the junior programmer is writing terrible tests because he just doesn't know why they're bad tests (yet).
10
hibikir 2 days ago 1 reply      
I might be completely wrong on this one, but it seems to me that a lot of the precepts of TDD and full code coverage have a lot to do with the tools that were used by some of the people that popularized this.

Some of my day involves writing Ruby. I find using Ruby without 100% code coverage to be like handling a loaded gun: I can track many outages to things as silly as a typo in an error handling branch that went untested. A single execution isn't even enough for me: I need a whole lot of testing on most of the code to be comfortable.

When I write Scala at work instead, I test algorithms, but a big percentage of my code is untested, and it all feels fine, because while not every piece of code that compiles works, the kind of bugs that I worry about are far smaller, especially if my code is type heavy, instead of building Map[String,Map[String,Int]] or anything like that. 100% code coverage in Scala rarely feels as valuable as in Ruby.

Also different styles make the value of having tests as a way to try to force good factoring changes by language and paradigm. Most functional Scala doesn't really need redesigning to make it easy to test: Functions without side effects are easy, and are easier to refactor. A deep Ruby inheritance tree with some unnecessary monkey patching just demands testing in comparison, and writing the tests themselves forces better design.

The author's code is Java, and there 95% of the reason for testing that isn't purely based on business requirements comes from runtime dependency injection systems that want you to put mutability everywhere. Those are reasons why 100% code coverage can still sell in a Java shop (I sure worked in some that used too many of the frameworks popular in the 00s), but in practice, there's many cases where the cost of the test is higher than the possible reward.

So if you ask me, whether 100% code coverage is a good idea or not depends a whole lot on your other tooling, and I think we should be moving towards situations where we want to write fewer tests.

11
userbinator 1 day ago 2 replies      
But remember nothing is free, nothing is a silver bullet. Stop and think.

I'm going to be the one to point at the elephant in the room and say: Java. More precisely, Java's culture. If you ask developers who have been assimilated into a culture of slavish bureaucratic-red-tape adherence to "best practices" and extreme problem-decomposition to step back and ask themselves whether what they're doing makes sense, what else would you expect? These people have been taught --- or perhaps indoctrinated --- that such mindless rule-following is the norm, and to think only about the immediate tiny piece of the whole problem. To ask any more of them is like asking an ostrich to fly.

The method names in the second example are rather WTF-inducing too, but to someone who has only ever been exposed to code like that, it would probably seem normal. (I counted one of them at ~100 characters. It reminds me of http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom... )

Many years ago I briefly worked with enterprise Java, and found this sort of stifling, anti-intellectual atmosphere entirely unbearable.

12
saltedmd5 1 day ago 1 reply      
The big error being made in this article (and most of the comments here) is the assumption that the purpose of unit tests is to "catch bugs." It isn't.

The purpose of unit tests is to document the intended behaviour of a unit/component (which is not necessarily a single function/method in isolation) in such a way that if someone comes along and makes a change that alters specified behaviour, they are aware that they have done so and prevented from shipping that change unless they consciously alter that specification.

And, if you are doing TDD, as a code structure/design aid. But that is tangential to the article.

13
state_less 1 day ago 0 replies      
Unit tests are a poor substitute for correctness. Many unit tests does not a strong argument make.

Unit tests are typically inductive. The developer shows that cases A, B and C give the expected results for function f. God help us if our expectations are wrong. So you're saying that since A, B and C are correct, function f is correct. That may be, or maybe A, B and C are trivial cases; in other words, you've made a weak argument.

100% test coverage sounds like lazy management. Alas, the manager may have worked their way up via social programming rather than computer programming. In such cases, better to say you have 110% test coverage.

14
circlefavshape 1 day ago 2 replies      
/me raises hand on the pro-testing side

I've been programming for a living since 1996, and only recently started to do TDD in the normal sense of writing unit tests before writing code. I've found it to be an enormous help with keeping my code simple - the tests or the mocking getting difficult is a great indicator that my code can be simplified or generalised somehow.

I argued for functional instead of unit testing for years, but it was only when a team-mate convinced me to try unit testing (and writing the tests FIRST) that the scales fell from my eyes. Unit testing isn't really testing, it's a tool for writing better code.

BTW from an operational perspective I've found it's most effective to insist on 100% coverage, but to use annotations to tell the code coverage tool to ignore stuff the team has actively decided not to test - much easier to pick up the uncovered stuff in code review and come to an agreement on whether it's ok to ignore

15
johnwatson11218 1 day ago 2 replies      
Not sure if this is already mentioned but for me the most concise illustration of this fallacy was in The Pragmatic Programmer book. They had a function like this:

double f(double x) { return 1 / x; }

They pointed out that it is trivial to get 100% coverage in test cases, but unless your tests include passing in 0 as the parameter, you are going to miss an error case.
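
Translated into a runnable Java sketch (the framing around the book's one-liner is mine):

  public class CoverageTrap {
      static double f(double x) { return 1 / x; }

      public static void main(String[] args) {
          // A single call exercises every line: "100% coverage".
          System.out.println(f(4.0));  // 0.25

          // ...and yet the interesting input was never tested. For doubles,
          // Java quietly returns Infinity here rather than raising an error --
          // is that what the caller expects?
          System.out.println(f(0.0));  // Infinity
      }
  }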

16
biztos 1 day ago 1 reply      
A lot of people here seem to have strong opinions against 100% coverage, so I'll risk their ire with my strong opinion in favor.

If you have, say, 95% coverage -- and most corporate dev orgs would be thrilled with that number -- and then you commit some new code (with tests) and are still at 95%, you don't know anything about your new code's coverage until you dig into the coverage report. Because your changes could have had 100% coverage of your new thing but masked a path that was previously tested; or had 10% but exercised some of the previously missing 5%.

If you have 100% coverage and you stay at 100% then you know the coverage of your new code: it's 100%. Among other things this lets you use a fall in coverage as a trigger: to block a merge, to go read a coverage report, whatever you think it warrants.

Also, as has been noted elsewhere, anything other than a 100% goal means somebody decides what's "worth" testing... and then you have either unpredictable behavior (what's obvious to whom?) or a set of policies about it, which can quickly become more onerous than a goal of 100%.

It's important to remember that the 100% goal isn't going to save you from bad tests or bad code. It's possible to cheat on the testing as well, and tests need code review too. There's no magic bullet, you still need people who care about their work.

I realize this might not work everywhere, but what I shoot for is 100% coverage using only the public API, with heavy use of mock classes and objects for anything not directly under test and/or not stable in real life. If we can't exercise the code through the public API's then it usually turns out we either didn't rig up the tests right, or the code itself is poorly designed. Fixing either or both is always a good thing.

I don't always hit the 100% goal, especially with legacy code. But it remains the goal, and I haven't seen any convincing arguments against it yet.

Open the flame bay doors, Hal... :-)

17
apo 1 day ago 0 replies      
You don't need to test that. ... The code is obvious. There are no conditionals, no loops, no transformations, nothing. The code is just a little bit of plain old glue code.

Here's the code:

  @Override
  public void initialize(WatchlistDao watchlistDao) {
      watchlistDao.loadAll(watchListRow -> watchlists.add(watchListRow));
  }
Maybe I'm dense, but this code raises at least one question that I would prefer to see answered by tests.

The collection watchlists appears to be defined in a scope above the method under test. What happens if watchlists is null for some reason? What should the behavior be?

Then there's the tricky question of what to do as this method evolves. Next month, a watchListRow might need to be updated with a value before being added to watchlists. Later, a check might be added to ensure some property exists on watchListRow. At what point will a test be written for this method?

18
coding123 2 days ago 2 replies      
I wish people cared more about the craft of an amazing plugin architecture or an advanced integration between a machine learning system and a UI, but no, more and more of our collective development departments care more about TDD and making sure things look perfect. Don't worry about the fact that there are no integration tests and we keep breaking larger systems, and while there might be 100% code coverage, no developer actually understands the overall system.
19
kabdib 2 days ago 0 replies      
I've seen projects where management had rules like "you must have 70% code coverage before you check in". Which is crazy, for a lot of reasons.

But the developer response in a couple of cases was to puff the code up with layers of fluff: levels of abstraction that just passed stuff down to the next layer, unchanged, with a bunch of parameter checking at each new level. This had the effect of adding a bunch of code with no chance of failure, artificially inflating the amount of code covered by the tests (which, by the way, were bullshit).

I got to rip all that junk out. It ran faster, was easier to understand and maintain, and I made sure I never, ever worked with the people who wrote that stuff.

20
devrandomguy 1 day ago 2 replies      
If you can prove that your testing process is perfect, then after the test suite is written your entire development process can be reduced to the following:

 cat /dev/random | ./build-inline.sh | ./test-inline.sh | tee ./src/blob.c && git commit -Am "I have no idea how this works, but I am certain that it works perfectly, see you all on Monday!" && git push production master --force
When presented like this, relying on human intelligence and experience doesn't seem like such a bad thing after all.

Just so we're clear, my username was not inspired by this scheme.

21
jganetsk 1 day ago 1 reply      
He's right, but he's conflating 100% code coverage, the use of mocks, and the writing of tests.

Always write tests. And strive for maximum coverage. But make sure you write the right kinds of tests:

- Don't overuse mocks. Mocks don't represent real conditions you would actually see in production. Favor using real dependencies over mocks.

- Don't overspecify your tests. Test only publicly specified parts of the contract. Things that you need to be true and that the callers of the module expect to be true. And yes, you will change the test when the contract changes.

22
lowbloodsugar 1 day ago 1 reply      
I once joined a company that had 90% code coverage. After a while it became clear that they were all vanity tests: I could delete huge swathes of code with zero test failures. We let the contractors who wrote it move on, and we formed a solid team in house. We don't run code coverage any more because it makes the build run four times slower. Instead, I trust our teams to write good tests. Sometimes that means <100% coverage, and the teams are able to justify it.

Some feedback on the article:

>Test-driven development, or as it used to be called: test-first approach

Test-first is not the same as test-driven. The test-first approach includes situations where a QA dev writes 20 tests and then hands them to an engineer who implements them. That's not TDD.

>"But my boss expects me to write test for all classes," he replied.

That's very unlikely to be TDD. "Writing tests because I've been told to" is never likely to be "I'm writing the tests that I know to be necessary", and that's all TDD is: writing necessary tests. If the test isn't necessary, then neither is the code.

>Look, if that imaginary evil/clueless developer comes and breaks that simple code, what do you think he will do if a related unit test breaks? He will just delete it.

Sure. But then their name is on that act in the commit log. The test is a warning. I've been lucky not to have worked with evil developers, but I have worked with some clueless ones, and indeed some have just deleted tests. That's an opportunity for education, and quality has steadily improved.

>The tragedy is that once a "good practice" becomes mainstream we seem to forget how it came to be, what its benefits are, and most importantly, what the cost of using it is.

Totally agree. So many programmers and teams practice cargo cult behaviors. Unfortunately, this article is one of them: making claims about TDD, and unit tests in general, without understanding "why" TDD is effective.

23
WalterBright 1 day ago 1 reply      
Of course any metric can be rendered useless if one "works the metric" rather than the intent of the metric.

But in my experience, code coverage from unit tests has correlated strongly with faster development and far fewer bugs being uncovered in the field.

24
xutopia 1 day ago 1 reply      
The tragedy he mentions isn't 100% code coverage itself. It's more about people using the wrong tools for the job and using 100% coverage as an indication that everything is fine.

100% coverage for my team means that we were intentional about our code. It's not hard at all to have 100% coverage in a Ruby application as it is possible to do a lot with very little code.

Furthermore it allows us to bring in a junior on the team because we know they have a safety net.

Also for the record we do code reviews and are very thoughtful about the code we write. 100% coverage does not stop the possibility of some bugs inserting themselves somewhere.

25
djtriptych 2 days ago 0 replies      
My version of this is working on a team with 100% coverage that still saw a steady and heavy influx of bugs. 100% coverage does not mean bug free.

I advocate spending time on identifying/inventing the correct abstractions over coverage.

26
seabornleecn 2 hours ago 0 replies      
I think pursuing 100% test coverage is not a fixed state; it is a necessary process for learning how to write tests.

Think about one question first: why did the manager force developers to achieve 100% coverage? There must be some benefit, or the manager might as well have come from the competitor. Standing at a higher position and considering time and organizational factors, it might be a good choice. If every engineer in the company understood test coverage as deeply as the author, they really would not need to pursue 100% coverage. But in reality, we can see many companies that do not pursue test coverage at all; their coverage tends to be 0. That's why we need to force 100% test coverage for a short time. Engineers need time to form the habit of testing their code, and then to experience the pain of bad tests. Then they start to think about what kinds of tests are valuable.

27
dcw303 2 days ago 0 replies      
I think you should write a test.

Naming the test just "initialise" is not very useful, as it doesn't assert what you expect the method under test to do. Given that the purpose of the initialise function is to populate a watchlists collection variable from the parameter, I'd name the test something like "initialise_daoRecordCountIs9_watchlistCountIs9". The pattern I generally use is <method_name>_<assertion_under_test>_<expected_result>.

Then, my test would be the following:

* Set up / mock the dao parameter to have 9 rows

* Create an instance of the class under test and push in the dao parameter

* Verify / Assert that the class under test now has 9 items in the watchlists variable - I'm assuming there is a public method to access that.
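
Put together, a JUnit 4 sketch along those lines; the WatchlistDao/WatchlistRow types are guesses reconstructed from the article's snippet, not its real API:

  import java.util.ArrayList;
  import java.util.List;
  import java.util.function.Consumer;
  import org.junit.Test;
  import static org.junit.Assert.assertEquals;

  public class WatchlistModelTest {
      interface WatchlistRow {}
      interface WatchlistDao { void loadAll(Consumer<WatchlistRow> onRow); }

      static class WatchlistModel {
          final List<WatchlistRow> watchlists = new ArrayList<>();
          void initialize(WatchlistDao dao) { dao.loadAll(watchlists::add); }
      }

      @Test
      public void initialise_daoRecordCountIs9_watchlistCountIs9() {
          // Fake DAO that "loads" 9 rows by invoking the callback 9 times.
          WatchlistDao fakeDao = onRow -> {
              for (int i = 0; i < 9; i++) onRow.accept(new WatchlistRow() {});
          };

          WatchlistModel model = new WatchlistModel();
          model.initialize(fakeDao);

          assertEquals(9, model.watchlists.size());
      }
  }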

28
lacampbell 1 day ago 0 replies      
I feel like this high test coverage thing can only work if you have tight modules, tight interfaces, and you only bother testing at module boundaries. So the test cases almost function as a bit of executable API documentation - here's the method name, here's what it does, here's the contracts and/or static types, and.... given this input, you should get this output.

Do it for the high level bits you actually expose. If you're exposing everything, tests won't really save you - architecture and modularity are more fundamental and should be tackled first. If you're writing a big ball of mud, what benefit do you get testing a mudball?

29
ajmurmann 2 days ago 0 replies      
100% code coverage, even TDD'd, doesn't and shouldn't mean 100% unit tested. Glue code and declarations don't need unit tests. Some functional tests should provide all the coverage needed to give you confidence to refactor that code in the future.

Edit: while I'm a huge TDD advocate, I'm not a big advocate of measuring code coverage. That should only be necessary if you are trying to get coverage on a code base that wasn't TDD'd. Even then I'd rather add the coverage as I'm touching uncovered code. If it works and I'm not touching it, it doesn't need tests.

30
brlewis 2 days ago 0 replies      
There's a human tendency to overemphasize things you can quantify. So we try to figure out how to test every code path rather than what we should do: try to figure out which inputs we should test against.
31
koonsolo 1 day ago 1 reply      
I use the following list to decide on creating a unit test or not. More yeses means a unit test is a good idea.

1. Is it hard to instantly test the code when implementing it? (Might be the case for library code)

2. Is there a chance the underlying implementation might change (and so might break in the future)?

3. Will the interface of the class remain stable? (If not unit test needs to be rewritten too)

4. Will functional tests pass when something breaks in this class?

32
jondubois 1 day ago 0 replies      
Agreed. I would rather have 5% test coverage that checks against all risky edge cases/inputs than 100% test coverage that checks against arbitrary, low-risk inputs.

Writing tests to confirm the simplest, most predictable use cases is a waste of time - Those cases can be figured out very quickly without automated testing because they are trivial to reproduce manually.

33
Ace17 1 day ago 0 replies      
Having 100% code coverage is like having 0 warnings (although it certainly is a lot harder). In this situation, your tools are not telling you "all's good", but rather "I can't detect anything suspect here".

There's a good chance that the dev time needed to go from 90% coverage to 100% coverage might be better spent somewhere else.

34
rumcajz 1 day ago 0 replies      
I've seen a project with 100% unit test coverage, yet no e2e tests. Nobody knew whether the product worked at all.
35
yoav_hollander 1 day ago 0 replies      
One point already made by several people on this thread is that code coverage, while helpful, is not enough (and perhaps is not even the best bang for the buck).

In hardware verification (where I come from, and where the cost of bugs is usually higher), "functional coverage" is considered more important. This is usually achieved via constraint-based randomization (somewhat similar in spirit to QuickCheck, already mentioned in this thread).

I tried to cover (ahem) this whole how-to-use-and-improve-coverage topic in the following post: https://blog.foretellix.com/2016/12/23/verification-coverage...

36
DTrejo 2 days ago 0 replies      
Has anyone seen a fuzzer that creates variants based on a test suite with 100% coverage? Hmm... the fuzzer still wouldn't necessarily know how to create the correct invariants. #lazyweb
37
hultner 1 day ago 1 reply      
Back when I learnt Haskell we had a lecturer named John Hughes, who had co-authored a tool named QuickCheck [1]. We used this tool extensively throughout the course; with it, testing was quite simple and writing elegant generators was a breeze. In my experience, these tests did a much better job of finding edge cases than many unit tests I've seen in larger, close-to-full-coverage TDD projects.

As with much else, TDD should be a tool with the ultimate goal of aiding us in writing correct and less bug-riddled code; once the tool adds more work, it's no longer offering much aid.

[1] https://en.wikipedia.org/wiki/QuickCheck
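
For readers who haven't seen it, a hand-rolled miniature of the idea in plain Java (real QuickCheck also shrinks counterexamples, which this sketch does not):

  import java.util.ArrayList;
  import java.util.Collections;
  import java.util.List;
  import java.util.Random;

  public class ReverseProperty {
      public static void main(String[] args) {
          Random rnd = new Random(1234);  // fixed seed keeps failures reproducible
          for (int run = 0; run < 1000; run++) {
              // Generator: a random-length list of random integers.
              List<Integer> xs = new ArrayList<>();
              for (int i = rnd.nextInt(20); i > 0; i--) xs.add(rnd.nextInt());

              // Property: reversing twice is the identity.
              List<Integer> twice = new ArrayList<>(xs);
              Collections.reverse(twice);
              Collections.reverse(twice);
              if (!twice.equals(xs))
                  throw new AssertionError("counterexample: " + xs);
          }
          System.out.println("1000 random cases passed");
      }
  }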

38
zxcmx 2 days ago 1 reply      
I wish more people cared about path coverage as opposed to "line coverage".
39
waibelp 1 day ago 0 replies      
This reminds me of recent projects where developers started to mock every piece of code. The result was that all tests passed while the codebase exploded in real environments.

In my opinion the best advice is to force developers to use their brains. I know there are a lot of sh*tty CTO/CEO/HoIT/SomeOther"Important"Position people out there who see developers as code monkeys and say that developers are not paid to think, but in that case the best thing developers can do is learn to say "NO"... My experience with that kind of people is that they need to learn the meaning of "NO", instead of wasting time and money at the end of the day.

40
elchief 1 day ago 1 reply      
What you actually want to do is test the methods with the highest cyclomatic complexity first (where it's greater than 1).

IntelliJ has a plugin
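
For reference, a sketch of how the counting works on a hypothetical method:

  public class Complexity {
      // Cyclomatic complexity = decision points + 1. This method has four
      // (for, if, &&, ||), giving a complexity of 5 -- a far better first
      // target for tests than a plain getter, whose complexity is 1.
      static int score(int[] xs, boolean strict) {
          int total = 0;
          for (int x : xs) {
              if (x > 0 && (!strict || x < 100)) {
                  total += x;
              }
          }
          return total;
      }

      public static void main(String[] args) {
          System.out.println(score(new int[]{5, -3, 250}, true));   // 5
          System.out.println(score(new int[]{5, -3, 250}, false));  // 255
      }
  }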

41
keithnz 1 day ago 0 replies      
I've heard Kent Beck talk about having the smallest amount of tests that give you confidence.

Which he also gave as an answer here:

http://stackoverflow.com/questions/153234/how-deep-are-your-...

But I know a lot of people in the early days of XP went to extremes, 100% code coverage, mutation tools for every condition to ensure unit tests broke in expected ways, etc. But they were more experiments in pushing the limits rather than things that gave productivity gains.

42
falcolas 1 day ago 0 replies      
IMO, if I ever have 100% code coverage, I did something wrong. The best I can usually achieve is 95-98%, because of my defensive coding to warn about the "impossible" use cases.

Escape a `while True` loop? Log it, along with the current state of the program, and blow up (so we can be restarted). Memory allocation error? Log it. The big "unexpected exception" clause around my main function? Log it.

If I do hit those in testing, my code is wrong.
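
A Java rendering of that kind of deliberately unreachable branch (hypothetical example):

  public class DefensiveDefault {
      enum State { OPEN, CLOSED }

      static String describe(State s) {
          switch (s) {
              case OPEN:   return "open";
              case CLOSED: return "closed";
              default:
                  // The "impossible" branch: only reachable if State grows a value
                  // this switch doesn't know about. Deliberately left uncovered --
                  // reaching it in a test run would itself signal a bug.
                  throw new IllegalStateException("impossible state: " + s);
          }
      }

      public static void main(String[] args) {
          System.out.println(describe(State.OPEN));    // open
          System.out.println(describe(State.CLOSED));  // closed
          // No call can reach the default branch today, so 100% line coverage
          // is unattainable here by design.
      }
  }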

43
rothron 1 day ago 1 reply      
I don't think I know anyone who does TDD. Uncle Bob has indoctrinated a few zealots into that mindset, but it all comes off as crazy to me. The germ of a good idea taken way too far.

People of that school tend to write tests that test implementation rather than functionality. As a result you get fragile tests that break without telling you what went wrong, only how the implementation has changed.

Good tests should test behavior. A change in implementation shouldn't break the test.

44
xmatos 1 day ago 1 reply      
Build tests against your app's public interface. On a web app, that would be your controllers or API.

That will give you good coverage while avoiding unit tests that are too simple to be useful.

It's really hard to foresee all possible input variations and business logic validations, but that doesn't mean your test suite is useless.

It just means it will grow every time you find a new bug, and you are guaranteed that one won't happen again...

45
ioquatix 2 days ago 1 reply      
I have 100% code coverage on a couple of projects. It has two benefits:

Behaviour is completely covered by tests, so changes in APIs which might break consumers of the library will at least be detected.

New work on the library tends to follow the 100% coverage convention, so it's somewhat easier to maintain. Apps that have 90% coverage, for example, tend to slip and slide around. Having 100% coverage sets the standard: "If your contribution doesn't have 100% coverage it won't be accepted". I don't think this is a bad default position.

46
vinceguidry 1 day ago 0 replies      
I noticed the author was speechless in two situations, both of which involved "but we write all our tests in <test-framework>." This is legitimate and should be taken more seriously by the author.

Codebases serve businesses and businesses value legibility over efficacy. It's more important to them to have control over their assets than to have better assets. Using one test framework is in perfect service of that goal.

It's inefficient in that it will take future developers more time to understand that code. But fewer architectural elements means that you can get by with less senior programmers.

Imagine if you went onto a software project and they were using 6 different databases because every time they had a new kind of data that they wanted to access differently, they reached for another database rather than use the one they had.

Of course nobody would ever do that, well I hope anyway, but I do see a lot of unnecessary architectural complication in projects in service of "using the right tool for the job." And it can balloon. A new test framework has to work in your CI framework. You need to decide how to handle data. It's not a huge decision, but it's more complicated than most devs would think, and it'll take up more of your time than you'd expect.

You can generalize this to the main thrust of the article. 100% code coverage is not a bad goal to want to hit. Sure, you're going to get a lot of waste. But you're not paying for it; your employer is. And your employer might have a different idea of which side of the tradeoff they want to be on and where to draw the line. You know the code way better than they will, but they know the economics far better than you ever could.

47
michaelfeathers 1 day ago 1 reply      
Coverage isn't the goal. The goal is understanding.

Write a test if you don't feel confident that a piece of code does what you think it does. If you're not sure what it does now, there's little chance that you or anyone else will in the future, so write a test to understand it and to make that understanding explicit.

Use curiosity as a driver.

48
Dove 1 day ago 0 replies      
Automated tests are code, and come with all the engineering and maintenance concerns of 'real' code. They don't do anything for your customers, though, so are only appropriate when they actually make your work faster or safer.

Automated tests are a spec, and are exactly as hard to write completely and correctly, and as easy to get wrong in ignorance, as a 'real' spec. If you find them easy to write, odds are good you would find the code easy to visually verify as well - which is to say, you're working on a trivial problem.

They have their place, but that place is not everywhere. It is where they are efficient and valuable. I particularly look for places where they are like the P half of an NP problem, an independent estimate of the answer to a math problem. If you ever find yourself writing the same code twice, unless it's a safety-critical system or something, that's a moment to stop and reflect on the value of what you are doing.

49
mirko22 1 day ago 0 replies      
The title does not mean anything and is basically clickbait in my opinion, as it sounds cool to trash some ideal. That said, the magic 100% number is far removed from reality and does not represent anything by itself.

100% coverage on a project of what size? Imagine you have a single-script project that does exactly one thing, and 2 tests are enough to verify that it works without doing it manually. That is not the same as writing tests for a file system, or tests which consist mostly of mocks upon mocks upon mocks.

I think the real problem is someone comes up with an idea, like TDD, tells people about it, some people hear about it and start preaching it, some people start believing it, and nobody actually thinks things through, usually because they don't have the experience (it's not a fetish, as someone said). Like everything in life, you have to think things through before doing them, and ask yourself: is this worth doing, and when is it worth doing? You can't just say: "Oh, we are doing TDD, thus everything must be done the TDD way".

For people who say tests are useless, or that good code does not need tests, I ask: when you make a change, do you still make sure your code works by hand? And if you do, why don't you automate that? You are a programmer after all.

And for those who say you need to test everything: well, you don't, especially if you need to mock most of it, or it is really not that important a piece of code, such as a dev tool. What you want to make sure works is the customer/user-facing stuff that must work for you to get paid, and you want to be able to verify it at any time of day without losing hours clicking around checking for stuff.

So this is not straightforward: 100% means nothing without context, and doing anything in excess and without valid reasons is pointless or even harmful. And this has nothing to do with programming but with life in general.

50
ishtu 1 day ago 0 replies      
>Testing is usually regarded as an important stage of the software development cycle. Testing will never be a substitute for reasoning. Testing may not be used as evidence of correctness for any but the most trivial of programs. Software engineers some times refer to "exhaustive" testing when in fact they mean "exhausting" testing. Tests are almost never exhaustive. Having lots of tests which give the right results may be reassuring but it can never be convincing. Rather than relying on testing we should be relying in reasoning. We should be relying on arguments which can convince the reader using logic. http://www.soc.napier.ac.uk/course-notes/sml/introfp.htm
51
antirez 1 day ago 0 replies      
Code coverage is an illusion, since what you want is actually "possible states coverage". You can cover all the lines of your code and still cover a minority of the possible states of the program, and especially a minority of the most probable states of the program when actual users execute it; or you can cover 50% of your lines of code and yet cover many more real-world states, and states for which it is more likely to find a bug. I think that more than stressing single features with unit tests, it is more useful to write higher-level stress tests (fuzz tests, basically) that have the effect of testing lines of code as a side effect of exploring many states of the program. Specific unit tests are still useful, but mostly in order to ensure that edge cases and the main normal behavior correspond to the specification. As in everything, it is the developer's sensibility that should drive what tests to write.
52
reledi 1 day ago 0 replies      
I've found that some bootcamps are responsible for this attitude, as they preach having 100% coverage. And no one really questions the experienced and heavily opinionated teacher.

It's fine to use hyperbole and paint things black and white when teaching, so the point comes across more easily. But students should be made aware of the caveats before they graduate, at least.

53
knodi123 1 day ago 0 replies      
I recently broke a unit test by adding one entry to a hash constant (a list of acceptable mime types and their corresponding file extensions). I looked at the test, and it was just comparing the defined constant, to a hardcoded version of itself.

I rewrote the test by converting the constant to a string, taking a checksum of it, and comparing _that_ to a short hardcoded value. Now the test is just 1 line of code, instead of 41! Then I put it through code review, and my team said "What a ridiculous test." But they didn't see any problem in the previous version that compared it to a 40-line hardcoded hash.

It's a weird world.

54
mjevans 2 days ago 0 replies      
I think I'd rather focus on documenting the information flow, and on having the tools to track down where things start to go wrong when there's a problem and I ask things to run with more verbosity.

Initial "complete coverage" should probably start from mockups that test an entire API. The complete part should be that, in some way, the tests cover expected successes AND failures (successfully return failure) of every part of the API, but there's no need to test things individually if they've already been tested by other test cases.

Invariably reality will come up with more cases and someone will notice an area that wasn't quite fully tested. That's where a bug exists, but the golden test cases probably wouldn't have located it anyway. It'll take thousands or millions of users to hit that combination and notice it. Then you get to add another test case while you're fixing the problem.

55
josteink 20 hours ago 0 replies      
> The tragedy is that once a "good practice" becomes mainstream we seem to forget how it came to be, what its benefits are, and most importantly, what the cost of using it is.

Totally agree. You can say this about lots of things really and not just tests.

56
tommikaikkonen 1 day ago 0 replies      
Property-based testing has made testing more productive and fun for me. You write a few lines of code that produce a large amount of tests. The idea is obviously so useful, I'm surprised it's uncommon in practice. When you think about coverage in terms of inputs applied instead of statements executed, property-based testing is far more productive than writing tests by hand.

It's not a silver bullet though. Some property-based tests are easy to write but offer little value. Sometimes you spend more time writing code to generate the correct inputs than the value of the test warrants. It has a learning curve. Still, I think it is the most powerful tool you can master for testing.

57
vitro 1 day ago 0 replies      
To paraphrase: "Premature testing is the root of all evil".

How I do it is going from rough testing of pages and components to granular testing of those parts which had some error.

For pages, I just run them to see if they display without producing errors; the same goes for critical components. This gives me a roughly tested system that works from the user's perspective, with little time investment.

Then I test critical business logic, but usually only after some error was reported.

Mind though that I am freelance developer unconstrained by organizational rules.

58
afpx 1 day ago 0 replies      
Many of us have made similar mistakes (especially early in our careers) when taking on new techniques with which we became particularly enthralled. That's why it's a good idea to have a couple of 'elders' on staff, so as not to allow youthful passion to wreak havoc. They tend to keep teams pragmatic and lazy (a good thing, in programming).

For instance, I remember all the bad code that I wrote and read circa 1997-1999, after design patterns became the rage.

59
raverbashing 1 day ago 1 reply      
Most of "we should go for 100% coverage" is simply cargo-culting (pushed by "gurus" like Uncle Bob - the negative aspects of the word guru implied)

Not to mention that 100% coverage is no guarantee the system works; in practice, quite the opposite.

Not to mention this BDD crap, which only makes my blood boil; it's syntactic yuck disguised as syntactic sugar.

60
chmike 1 day ago 0 replies      
While 100% code coverage doesn't guarantee 0% bugs, it's useful for easily detecting newly added untested code and possible new bugs. Another point is that the code may look obviously right by visual inspection, but we want to automate the check. Relaxing the 100% coverage requirement is a lazy slippery slope I don't take with my code.

The danger of 100% coverage is that the goal of the tests becomes the coverage number and not bug detection anymore.

61
jowiar 1 day ago 0 replies      
One of the pressures for 100% coverage is working in a non-typesafe language. The gospel of coverage largely evolved in the Ruby community, where I often see test suites that look like a handrolled typechecker.
62
Ace17 1 day ago 0 replies      
See the paper "On the Danger of Coverage Directed Test Case Generation": http://link.springer.com/chapter/10.1007%2F978-3-642-28872-2... The idea is that a test suite can have 100% coverage and still be a very bad test suite.
63
kelnos 2 days ago 1 reply      
I find that, as I'm building something from scratch, the vast majority of the errors I make are just things I didn't think of. Tests don't help there because I can't test on input that I don't even imagine happening. So I generally write few tests, because, to be honest, most code is trivial and algorithm-light. Sure, if I have to write a parser or something a bit more fiddly, I'll write a unit test to be sure that it's doing what I expect, but that tends to be the exception, not the rule. I do write my code with an eye toward later testability if it turns out to be necessary, but I find that to be fairly easy, and also a good measure of whether I'm doing the right thing: most code that isn't testable is probably code that's difficult to read and maintain, anyway, so if I look at something and think "oof, how would I ever write a test for that?" I'll usually delete it and start over.

When I have something that should be working, I test it in a more functional/integrative manner, and move on.

Later, I'll write unit tests when I need to. If I want to refactor something, or drastically change the implementation of something, I'll write out some tests beforehand to be sure that the pre and post behaviors match.

I've always thought that TDD is just premature optimization. You're optimizing for the idea that you -- or someone -- will later need to make large enough changes to your code that you'd worry about breaking it. In my experience that's fairly rare, and you spend less time overall if you just write the tests as you need them, not up-front. Yes, writing a test when the code is fresh in your mind will be faster than writing it much later, but then you're writing a ton of test code that likely won't be necessary.

An objection I hear to this is that you're not just writing tests for yourself, you're writing tests for the others who will need to help maintain your code, perhaps after you're gone. I'm somewhat sympathetic to this, but I would also say that if someone else needs to modify my code, they damn well better first understand it well enough such that they could write tests before changing it (if they deem it necessary). Anything else is just irresponsible.

(Note that I primarily work in strongly statically typed languages. If I were writing anything of complexity in ruby/python/JS/etc., I don't think I'd feel comfortable without testing a lot of things I'd consider trivial in other languages.)

(Also note that some things are just different: if you're writing a crypto library, then you absolutely need to write tests to verify behaviors, in part because you're building something that must conform to a formal spec, or else it's less than worthless.)

64
reledi 1 day ago 0 replies      
Striving for 100% coverage is an expensive mistake because, as a testing indicator, it gives you a false sense of security. And someone has to pay for the time spent writing and maintaining those tests, and for fixing the bugs that are still there.

I much prefer to use code coverage as a weak indicator for finding dead code.

65
divan 1 day ago 0 replies      
Good example of Goodhart's law: "When a measure becomes a target, it ceases to be a good measure."

https://en.wikipedia.org/wiki/Goodhart%27s_law
66
BJanecke 1 day ago 0 replies      
So generally, I write a test when I want to make an assumption a certainty. If I can't be certain that something is doing what it's supposed to, I write a test for it. Make sense?

So

```Int => add(x, y) => x + y;```

Doesn't get a test, however

```Int => formulateIt(x, y) => (x * y)^y```

Does
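
A minimal Python rendering of that split — `formulate_it` is a hypothetical stand-in for the pseudocode above, and the test pins down the behavior that isn't obvious at a glance:

```python
def formulate_it(x: int, y: int) -> int:
    # Non-obvious arithmetic: worth a test.
    return (x * y) ** y

def test_formulate_it():
    assert formulate_it(2, 3) == 216  # (2*3)^3
```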

67
hasenj 1 day ago 0 replies      
I think unit testing makes sense when you have a function doing some math that can't be easily verified to be sensible by merely glancing at the code for two minutes.

I'm not sure there's much use for it in other scenarios.

68
shusson 1 day ago 1 reply      
Has anyone read any papers about the relationship between code coverage and defects?
69
EugeneOZ 1 day ago 1 reply      
Laziness is a kind of populism. Such articles will always be upvoted.
70
crimsonalucard 1 day ago 0 replies      
It's a case of convention over common sense.
71
EngineerBetter 1 day ago 0 replies      
I'd suggest the tragedy here is an absence of kaizen and team processes that foster continuous improvement. If folks are doing inefficient things, that should be caught by the team in a retro or similar.
10
President Trump Dismisses FBI Director Comey washingtonpost.com
586 points by DamnInteresting  1 day ago   293 comments top 34
1
Animats 1 day ago 5 replies      
The FBI director is supposed to have a 10 year term. That went in after J. Edgar Hoover died. Nobody wanted another J. Edgar Hoover FBI Director for Life situation, but having the FBI director be a "pleasure of the President" appointment made it too political.

This makes Andrew G. McCabe acting FBI director. He's in the civil service, not a Presidential appointment. He was an FBI agent and worked his way up. From what little is available about him, he seems to be good at the job.[1] As civil service, he can only be fired for cause.

Appointing a new FBI director requires Congressional approval, and will be controversial.

[1] http://www.latimes.com/nation/na-la-fbi-deputy-director-2016...

2
fooey 1 day ago 5 replies      
These seem to be confirmed as real, so here are the letters being floated, from Trump, Sessions and Rosenstein, firing Comey and blaming the Clinton investigation:

Trump: https://pbs.twimg.com/media/C_apTsDXoAAVKYn.jpg

AG Sessions: https://pbs.twimg.com/media/C_apUYrXgAAihp2.jpg

Deputy AG Rosenstein: https://pbs.twimg.com/media/C_apVImXcAIKhfm.jpg

 Dear Director Comey: I have received the attached letters from the Attorney General and Deputy Attorney General of the United States recommending your dismissal as the Director of the Federal Bureau of Investigation. I have accepted their recommendation and you are hereby terminated and removed from office, effective immediately. While I greatly appreciate you informing me, on three separate occasions, that I am not under investigation, I nevertheless concur with the judgment of the Department of Justice that you are not able to effectively lead the Bureau. It is essential that we find new leadership for the FBI that restores public trust and confidence in its vital law enforcement mission. I wish you the best of luck in your future endeavors.

3
TheBiv 1 day ago 7 replies      
"Trump just fired the man leading a counterintelligence investigation into his campaign, on the same day that the Senate Intelligence commitee requested financial documents relating to Trump's business dealings from the treasury department that handles money laundering." -Comment from reddit that sums up how strange this is.
4
rwnspace 1 day ago 12 replies      
381 points, 171 comments, 1 hour ago; as of writing.

Why is this on the second page of HN, and not pole position? I assume/hope that there is some mechanism that stops new content from dominating other content too rapidly.

5
avs733 1 day ago 1 reply      
Sally Yates investigates Trump's cabinet: Fired by the Trump administration

Preet Bharara investigates Trump's cabinet: Fired by the Trump administration

Director Comey Investigates Trump's cabinet: Fired by the Trump administration

6
abalashov 1 day ago 2 replies      
I'm pretty sure the customary reply from the MAGA camp will be that these are all political appointees, and serve at the President's pleasure.

All that is formally true. But it doesn't make it any less uncanny that such a person would be fired at the very moment he ramps up an investigation into Trump's business activities.

7
jjordan 1 day ago 7 replies      
Say what you want about the politics, but it's inarguable that Comey, whether he wanted to or not, had become a partisan lightning rod for both sides. The unbiased credibility of the FBI was at stake with Comey at the helm, and this is probably a good move for the country.
8
davesque 1 day ago 0 replies      
Mods, please let this one live. This is big news and we can't ignore it. I don't care what the policies are about political stories. I also don't care if I can go somewhere else to read about it. I want to know what _this_ community's opinions are on the matter.
9
Matt3o12_ 1 day ago 4 replies      
Well, I would certainly be interested in the circumstances, especially considering that I always believed he was pro-Trump. Some even said he played an important role in Trump winning the election because he opened an investigation into Clinton's emails right before the election.
10
curiousgal 1 day ago 2 replies      
Flashbacks to Nixon's downfall.
11
colemannugent 1 day ago 2 replies      
A friendly reminder to both sides that whatever the current administration does, the next can undo.

This is especially important when the majority party decides to give itself more power and inadvertently gives their successors more than they intended.

12
rrggrr 1 day ago 1 reply      
Comey's book deal is going to be enormous. His great, great, great grandchildren will be buying Maseratis with the proceeds. He just needs to withstand another six months of testifying on the hill in front of at least two standing committees and probably a special committee.
13
grizzles 1 day ago 1 reply      
He was too much of a wildcard. Trump wants to wrap up the Russia thing and he needs someone who is more subservient to do that.
14
favorited 1 day ago 1 reply      
If I'm not mistaken, he's the first FBI director to be fired.

Edit: I was, in fact, mistaken.

15
satysin 1 day ago 1 reply      
This is going to make an amazing movie in a decade or two.
16
fencepost 1 day ago 0 replies      
Trump just wanted to be sure that Comey's statement last year about the iPhone hack cost was true.

"more than I will make in the remainder of this job, [...]"

17
iamjeff 1 day ago 0 replies      
President Trump cares little about protecting the Office of the President. His administration has a well-documented history of putting the thumb on the scale regarding the investigation of collusion between his campaign and Russian agents/agencies, and this is damaging the credibility of the office. This firing was also clearly decided on first, with the rationale secured afterward. It baffles the mind that Trump rationalizes this executive action by claiming that Comey was "mean to Clinton" when only a few days ago Comey had his trust. The reasoning cited, and the involvement of Sessions in interfering with an investigation that he recused himself from, is bogus. It is not unreasonable to claim that a cover-up is in full swing!
18
tannhauser23 1 day ago 1 reply      
Everyone should read the letter that the Deputy Attorney General wrote to the Attorney General in recommending that Comey be fired. It's brutal: http://apps.washingtonpost.com/g/documents/politics/fbi-dire...

This and Comey's recent misstatements to Congress about Huma Abedin forwarding sensitive emails to Anthony Weiner are alone grounds for Trump to fire Comey. Whether Trump had other motives... I mean, who knows? It's all speculation.

19
jacquesm 1 day ago 7 replies      
Someone better than me in English, please explain the meaning of the word 'recuse'?
20
hota_mazi 1 day ago 2 replies      
Trump is soon going to run out of people to fire.
21
wonder_bread 1 day ago 0 replies      
Which can only mean something else happened today that Trump's covering up in the headlines by firing Comey
22
Hermitian 1 day ago 0 replies      
Why isn't this on the front page?
23
Beltiras 1 day ago 0 replies      
Oh, this has got to burn. The man that gave him the office......
24
newsat13 1 day ago 6 replies      
Can someone clarify whether Comey is pro-Trump or not?
25
danielvf 1 day ago 2 replies      
Anyone have a link to the contents of the memo that recommended firing, and contained the reasons for that recommendation?
26
thrillgore 1 day ago 0 replies      
At this point we should demand an immediate Impeachment.
27
romeisburning 1 day ago 0 replies      
The thought of Trump nominating an FBI director is bone chilling. Summed up with what's known about Flynn and every other suspicious data point we have, I am increasingly sure that this is a modern-day coup of the USA.

Time to pause tech and effect change, this is leading to a future darker than I can possibly contemplate.

28
AnimalMuppet 1 day ago 0 replies      
Mr. Comey just acquired a badge of honor. No, I'm not being sarcastic. It's getting to the point where being fired is more honorable than remaining.
29
wtf_is_up 1 day ago 0 replies      
It's about time. Comey has politicized the FBI in ways that have damaged its reputation for years to come.
30
mtgx 1 day ago 6 replies      
Hopefully there's a silver lining and that this means the encryption backdoor push (led by Comey) will slow to a crawl or be forgotten. He was already preparing a push for FISA Amendments renewal together with Dianne Feinstein (who is apparently having a change of heart about her own retirement).
31
Shivetya 1 day ago 3 replies      
Trump had to dismiss Comey. Comey damaged the FBI in his recent sessions with Congress to the point the FBI was on the defensive trying to set the record right. Considering the erratic behavior with both the Clinton and Russia issues it is doubtful that Comey was capable of continuing in such an office.

Like or dislike Trump, there have been many on the Democratic Party side calling for Comey to be gone and the odd part is many are now rushing to the guy's defense. That and he was fired over incorrect testimony about a Clinton aide, testimony that painted her in a worse position than deserved.

Irrational is the best way to describe the reaction of many. I was really shocked by some in the press; it is near impossible to separate journalists from opinion editors when they cannot separate the roles themselves.

32
bingomad123 1 day ago 0 replies      
Why are we discussing politics on HN?
33
whistlerbrk 1 day ago 0 replies      
It's time for this dictator to be impeached. People need to start marching on Washington.
34
hsnewman 1 day ago 0 replies      
Christie will be appointed FBI director, and Comey will get a nice job in the Trump organization for falling on the sword.
11
SQL Notebook sqlnotebook.com
431 points by mmsimanga  1 day ago   101 comments top 20
1
electroly 1 day ago 10 replies      
Hello everyone! Author here. I didn't expect anyone to find this repo, much less post it on Hacker News!

This project is inactive for two main reasons:

- SQLite is not a great general-purpose SQL engine. Poor performance of joins is a serious problem that I couldn't solve. The virtual table support is good but not quite good enough; not enough parts of the query are pushed down into the virtual table interface to permit efficient querying of remote tables. Many "ALTER" features are not implemented in SQLite which is a tough sell for experimental data manipulation.

- T-SQL, the procedural language I chose to implement atop SQLite, is not a great general-purpose programming language. Using C# in LINQpad is a more pleasant experience for experimentally messing around with data. R Studio is a good option if you need statistical functions.

I think several good solutions in this problem space exist. A local install of SQL Server Express can be linked to remote servers, allowing you to join local tables to remote ones. That setup serves nearly all of SQL Notebook's use cases better than SQL Notebook does. LINQpad is also very convenient for a lot of use cases.

I appreciate the interest! I may spin off the import/export functionality into its own app someday, as I had a lot of plans in that area, but I think SQL Notebook as it stands is a bit too flawed to develop fully.

2
bobochan 1 day ago 3 replies      
This looks very interesting.

I recently had to teach a series of workshops on SQL and I was trying to figure out the best system to allow students to independently work with small datasets without having to install any software. I found Alon Zakai's absolutely fantastic version of SQLite in JavaScript here:

https://github.com/kripken/sql.js

I coupled that library with a CodeMirror editor and got a working web based environment very quickly.

3
lima 1 day ago 1 reply      
Jupyter/IPython + https://github.com/catherinedevlin/ipython-sql is a wonderful workflow for interactive DB exploration.
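
A minimal sketch of that workflow in a Jupyter/IPython cell, assuming `pip install ipython-sql`, a local SQLite file `example.db`, and a `users` table in it (all hypothetical names):

```python
# In a notebook cell: load the extension and connect via a SQLAlchemy URL
%load_ext sql
%sql sqlite:///example.db

# Run a query; the result set renders as a table inline
%sql SELECT name, COUNT(*) AS n FROM users GROUP BY name ORDER BY n DESC
```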
4
nrjames 1 day ago 1 reply      
I generally use the Firefox SQLite Manager extension when I need to explore SQLite databases. It serves its purpose pretty well, though it has some annoyances and UI quirks. https://addons.mozilla.org/en-US/firefox/addon/sqlite-manage...
5
TeMPOraL 1 day ago 0 replies      
Ouch, that would have been very useful to me had I known about it two months ago, when I was exploring the database dump from my old Wordpress blog (I'm finalizing the process of re-launching it as a static site). I managed, though, with a combination of MySQL Workbench and a Common Lisp REPL.

Anyway, bookmarking for the next time I'll need to play with relational data.

6
probdist 1 day ago 1 reply      
Looks pretty neat. Reminds me a bit of Linqpad, https://www.linqpad.net/ which I've also never used.
7
grouseway 1 day ago 0 replies      
Neat.

- How about import from clipboard (useful for cut and paste from Excel)?

- It doesn't seem to recognize tab delimiters in a .txt file. Maybe the import window should have a delimiter selector?

- Does it have a crosstab/pivot tool? Most SQL dialects are lacking here because they make you explicitly define crosstab columns, which is a pain for exploration work.

8
yread 1 day ago 0 replies      
Hmm looks nice but last commit was 8 months ago https://github.com/electroly/sqlnotebook
9
ckdarby 1 day ago 6 replies      
Can't exactly see the value this brings that Apache Zeppelin doesn't already offer.

https://zeppelin.apache.org/

10
stared 1 day ago 0 replies      
For having R in notebooks (similar to Jupyter Notebooks) I really recommend http://rmarkdown.rstudio.com/authoring_knitr_engines.html.

As a side benefit, it is easy to ggplot results. :)

11
agentultra 1 day ago 0 replies      
I've always wanted a nice SQL-oriented "notebook" type of application.

I get something of this experience in Emacs via `org-mode`, `sql-mode`, and `ob-sql-mode` minus the data-importing functionality... though with babel it's probably doable in a code block using a script.

Bonus: org-mode lets you export to many formats which makes sharing results quite easy.

12
carlosgg 1 day ago 0 replies      
I will check it out. You can also use R notebooks to embed SQL code in notebook format.

https://blog.rstudio.org/2016/10/05/r-notebooks/ (scroll down to "Batteries included")

I was playing around a bit with it:

https://carlosror.github.io/baseball_mysql/

13
educar 1 day ago 0 replies      
Very nice, I have been using https://addons.mozilla.org/en-US/firefox/addon/sqlite-manage... so far. Looks like this can replace it.
14
Dnguyen 1 day ago 1 reply      
In my daily work I often have the need to analyze Excel and CSV files from clients. I use http://harelba.github.io/q/ and it has worked most of the time. But this one seems promising, especially being able to query data from a file and join it with data from a database.
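
A minimal Python sketch of that file-plus-database join with pandas and sqlite3 — `clients.csv`, `company.db`, and the column names are all hypothetical:

```python
import sqlite3
import pandas as pd

# Load the client's spreadsheet export into a DataFrame
clients = pd.read_csv("clients.csv")

# Pull the matching rows out of the database
conn = sqlite3.connect("company.db")
orders = pd.read_sql_query("SELECT client_id, total FROM orders", conn)

# Join file data against database data, then aggregate
merged = clients.merge(orders, on="client_id")
print(merged.groupby("name")["total"].sum())
```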
15
daveorzach 1 day ago 1 reply      
Is there any Windows SQL software that can use system/machine ODBC data sources? My company uses OpenLink's ODBC drivers to access our main database (Progress OpenEdge). I have no problem using Python, Pandas, and pyodbc to connect to the database, but it isn't the best environment for developing queries.
16
krylon 1 day ago 0 replies      
At work, I routinely have a copy of SQL Server Management Studio open for the odd ad-hoc query I need to run against our ERP system's database.

This tool looks like it might be a useful replacement for this purpose, especially if it can handle CSV data, as well.

17
bognition 1 day ago 1 reply      
Windows-only is a shame; nearly all devs I know use OS X or Linux.
18
kencausey 1 day ago 0 replies      
Anyone else understand what they are referring to in the Getting Started notebook about a 'CREATE menu'? I don't see it anywhere.
19
bendykstra 1 day ago 0 replies      
I'm curious why it is not possible to import data from an SQLite file.
20
iagovar 1 day ago 0 replies      
Would this app be nice for a beginner with DB's for data analysis?
12
Beware of Transparent Pixels adriancourreges.com
359 points by tsemple  9 hours ago   58 comments top 13
1
dahart 6 hours ago 1 reply      
Really nice article! Succinctly demonstrates the problem with not using premultiplied alpha.

> As an Artist: Make it Bleed!

> If youre in charge of producing the asset, be defensive and dont trust the programmers or the engine down the line.

If you are an artist working with programmers that can fix the engine, your absolute first choice should be to ask them to fix the blending so they convert your non-premultiplied images into premultiplied images before rendering them!

Do not start bleeding your mattes manually if you have any say in the matter at all, that doesn't solve the whole problem, and it sets you up for future pain. The only right answer is for the programmers to use premultiplied images. What if someone decides to blur your bled transparent image? It will break. (And there are multiple valid reasons this might happen without your input.)

Even if you have no control over the engine, file a bug report. But in that case, go ahead and bleed your transparent images manually & do whatever you have to, to get your work done.

Eric Haines wrote a more technical piece on this problem that elaborates on the other issues besides halo-ing:

http://www.realtimerendering.com/blog/gpus-prefer-premultipl...

2
tantalor 8 hours ago 3 replies      
Reminds me of "Is there a reason Hillary Clinton's logo has hidden notches?"

https://graphicdesign.stackexchange.com/questions/73601/is-t...
3
dvt 8 hours ago 2 replies      
> Even with an alpha of 0, a pixel still has some RGB color value associated with it.

Wish the article was more clear as to why this happens. Let me elucidate: this happens because, per the PNG standard[0], 0-alpha pixels have their color technically undefined. This means that image editors can use these values (e.g. XX XX XX 00) for whatever -- generally some way of optimizing, or, more often than not, just garbage. There are ways to get around this by using an actual alpha channel in Photoshop[1], or by using certain flags in imagemagick[2].

[0] https://www.w3.org/TR/PNG/

[1] https://feedback.photoshop.com/photoshop_family/topics/png-t...

[2] http://www.imagemagick.org/discourse-server/viewtopic.php?t=...
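
A tiny sketch demonstrating the point directly with Pillow — the RGB bytes survive under a zero alpha:

```python
from PIL import Image

# A 1x1 image: fully transparent, but "red" is still stored in the pixel
img = Image.new("RGBA", (1, 1), (255, 0, 0, 0))
print(img.getpixel((0, 0)))  # (255, 0, 0, 0)
```

Whether an editor or encoder preserves those bytes on save is exactly the undefined behavior described above.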

4
fnayr 7 hours ago 2 replies      
This is extremely useful to take advantage of (that you can store RGB values in 0-alpha pixels). I've written some pretty simple but powerful shaders for a game I'm working on by utilizing transparent pixels' "extra storage", which allowed for either neat visuals or a greatly reduced number of images required to achieve a certain effect. For instance, I wrote a shader for a character's hair that had source images colorized in pure R, G, and B and then mapped those to a set of three colors defining a "hair color" (e.g. R=dark brown, G=light brown, B=brown). If I didn't have the transparent pixels storing nonzero RGB values, the blending between pixels within the image would be jagged, and the approach would have been unacceptable for production quality, leading to each hair style being exported in each hair color. As a total side note, I really enjoyed the markup on the website. Seeing the matrices colored to represent their component color value is really helpful for understanding. Nice job, author!
5
modeless 7 hours ago 2 replies      
I don't like this article because it blames the wrong people and buries the real solution, premultiplied alpha, at the bottom. Already there are many comments here that are confused because they didn't even see the premultiplied alpha part of the article.

The issue with the Limbo logo was not that the source image was incorrect. The image was fine. The blending was incorrect because the PS3 XMB has a bug. Not using premultiplied alpha when you are doing texture filtering is a bug.

6
jamesbowman 8 hours ago 1 reply      
Using premultiplied alpha avoids this. Jim Blinn's books from the 90s give a very thoughtful treatment of the topic.
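
A minimal numpy sketch of the approach: premultiply once, then composite with the simpler "over" operator, so a fully transparent pixel contributes nothing regardless of its stored RGB (float arrays in [0,1] with an RGBA last axis are assumptions):

```python
import numpy as np

def premultiply(rgba):
    out = rgba.copy()
    out[..., :3] *= rgba[..., 3:4]  # scale RGB by alpha
    return out

def over(src, dst):
    # Premultiplied "over": out = src + dst * (1 - src_alpha)
    return src + dst * (1.0 - src[..., 3:4])

src = premultiply(np.array([1.0, 0.0, 0.0, 0.0]))  # garbage red, alpha 0
dst = premultiply(np.array([0.0, 0.0, 1.0, 1.0]))  # opaque blue
print(over(src, dst))  # [0. 0. 1. 1.] -- the red never bleeds through
```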
7
jayshua 5 hours ago 2 replies      
While reading this article, it struck me that the amount of "useless" data increases as the alpha value approaches 0. For example: in a pixel with rgba values of (1.0, 0.4, 0.5, 0.0), the rgb values are redundant. Is there a color format that would prevent this redundancy? Perhaps by some clever equation that incorporates the alpha values into the rgb values? I don't think Premultiplied alpha would work, because you still need to store the alpha value for compositing later...
8
Kiro 8 hours ago 6 replies      
> pay attention to what color you put inside the transparent pixels

I don't understand this. When I make transparency I don't use any color? I use the Eraser tool or Ctrl-X, not a color with 0 opacity.

9
panic 7 hours ago 0 replies      
Premultiplied alpha is also more "correct" in that it separates how much each pixel covers things behind it (the alpha value) from the amount of light it is reflecting or emitting (the color values). These two values should really be interpolated separately, and that's what premultiplied alpha gives you.
10
VikingCoder 7 hours ago 4 replies      
Premultiplied alpha results in less color depth, though. If my alpha is 10%, then my possible RGB values become 0-25. Even if I multiply by 10, I still lose the maximum possible values 251-255, and only values 0, 10, 20, 30... 250, are possible.

The correct solution is to pay close attention to all of the factors... and to be ESPECIALLY aware of pixel scaling. Provide your RGBA textures at the 1:1 pixel scale they will be rendered (or higher!) if at all possible.

11
Kenji 7 hours ago 1 reply      
You also have a similar problem when you render opaque, rectangular images without the clamp edge mode, and the renderer is in tiling mode, so the borders wrap around when your picture is halfway between pixels and become a mix between the top/bottom or left/right colour, corrupting the edges. Easy to fix, but annoying until you get what it is that corrupts your edges.

Also: "The original color can still be retrieved easily: dividing by alpha will reverse the transformation."

C'mon, you can't say that and then make an example with alpha=0. Do you want me to divide by zero? The ability to store values in completely transparent pixels is lost.
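
A minimal sketch of the usual guard for that division — un-premultiply only where alpha is nonzero, accepting that color under alpha=0 is genuinely unrecoverable (same numpy conventions as the sketch above):

```python
import numpy as np

def unpremultiply(rgba):
    a = rgba[..., 3:4]
    out = rgba.copy()
    # Divide only where alpha > 0; alpha==0 pixels keep their (zero) color
    np.divide(out[..., :3], a, out=out[..., :3], where=a > 0)
    return out
```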

12
ninjakeyboard 8 hours ago 0 replies      
s/sawn/swan/
13
xchip 7 hours ago 0 replies      
TL;DR: use premultiplied alpha for transparency
13
BBR, the new kid on the TCP block apnic.net
349 points by pjf  1 day ago   45 comments top 12
1
notacoward 1 day ago 3 replies      
It's almost irresponsible to write an article on this topic in 2017 without explicitly mentioning bufferbloat or network-scheduling algorithms like CoDel designed to address it. If you really want to understand this article, read up on those first.

https://en.wikipedia.org/wiki/CoDel

2
brutuscat 1 day ago 0 replies      
First saw it at the morning paper: https://blog.acolyer.org/2017/03/31/bbr-congestion-based-con...

This is the story of how members of Google's make-tcp-fast project developed and deployed a new congestion control algorithm for TCP called BBR (for Bottleneck Bandwidth and Round-trip propagation time), leading to 2-25x throughput improvement over the previous loss-based congestion control CUBIC algorithm.

3
netheril96 1 day ago 0 replies      
Network performance across national borders within China has been abysmal since the censorship got much more serious. BBR seems promising, so more and more people who bypass the GFW with their own VPS (that includes me) have been deploying BBR, and seeing marvelous results.
4
huhtenberg 1 day ago 2 replies      
Any data on BBR vs Reno and Vegas sharing?

Link capacity estimation is easy. It's the co-existing gracefully with all other flow control options that's tricky.

5
emmelaich 1 day ago 0 replies      
This article is not only a great intro to BBR, but an excellent introduction to the history of flow control.

Congrats to Geoff and his team.

6
skyde 10 hours ago 0 replies      
Would adding this only to the HTTP reverse proxy machines provide most of the benefit, without having to patch all servers?

This seems to have the greatest effect over WAN links.

7
abainbridge 1 day ago 1 reply      
Not to be confused with BBR enhancing the Mazda MX-5: https://www.pistonheads.com/news/ph-japanesecars/mazda-mx-5-...

Also significantly reduces latency and increases throughput :-)

8
skyde 1 day ago 4 replies      
How can we use it today? Is it in the Linux kernel already, and easy to enable?
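
For reference, BBR has been in mainline Linux since 4.9. A minimal sketch of enabling it system-wide, assuming a 4.9+ kernel with the tcp_bbr module built (early BBR also expects the fq qdisc):

```
# /etc/sysctl.d/99-bbr.conf
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
```

Apply with `sysctl --system` and check the active algorithm with `sysctl net.ipv4.tcp_congestion_control`.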
9
emmelaich 1 day ago 0 replies      
> ... the startup procedure must rapidly converge to the available bandwidth irrespective of its capacity

It seems to me that you'd be able to make a rough guesstimate by noting the ip address; whether it's on the same LAN, or continent/AS.

It wouldn't matter if you got it very wrong as long as you converged quickly to a better one (as you have to do anyway)

10
kstenerud 1 day ago 0 replies      
It seems like the best way to handle this situation is to assume that all other algorithms are hostile, and to seize as much bandwidth as you can without causing queue delay. That would reduce the problem set to a basic resource competition problem, which could then be solved with genetic algorithms.
11
raldi 1 day ago 0 replies      
For those dying to know what it stands for: Bottleneck Bandwidth and Round-trip time
12
gritzko 1 day ago 0 replies      
Sounds like they adapted LEDBAT delay measuring tricks.
14
Maintainers make the world go round: Innovation is an overrated ideology projectm-online.com
368 points by ForHackernews  2 days ago   167 comments top 27
1
wjke2i9 1 day ago 1 reply      
> I've actually felt slightly uncomfortable at TED for the last two days, because there's a lot of vision going on, right? And I am not a visionary. I do not have a five-year plan. I'm an engineer. And I think it's really -- I mean I'm perfectly happy with all the people who are walking around and just staring at the clouds and looking at the stars and saying, "I want to go there." But I'm looking at the ground, and I want to fix the pothole that's right in front of me before I fall in. This is the kind of person I am. - Linus Torvalds @TED[1]

[1] https://www.ted.com/talks/linus_torvalds_the_mind_behind_lin...

2
whack 2 days ago 5 replies      
You could say the same thing about any non-glamorous/lucrative position.

"Garbagemen make the world go round. Without them, we would drown in our own filth"

"Nannies make the world go round. Without them, half the workforce would be stuck at home"

"Auto mechanics make the world go round. Without them, we would have no way of getting places"

Ultimately, all such arguments are inane and pointless because every single job that exists in society

A) is important to the people paying for it

B) has wages that are based on both the importance of the job and how easy it is to find someone capable of doing it

C) The idea of glamorizing any job, and allowing yourself to be influenced by a job's glamor-rating, is just superficial drivel. Don't judge yourself or others by the job listed on their business card. If you feel the need to judge someone at all, judge them by the impact that they, as an individual, are making in the world.

3
Animats 2 days ago 5 replies      
"If the President had picked me to predict which country [in postwar Europe] would recover first, I would say, 'Bring me the records of maintenance.' The nation with the best maintenance will recover first. Maintenance is something very, very specifically Western. They haven't got it in Russia. If I got in there in the warehouse, let's say, and I saw that the broom had a special nail, I would say, 'This is the nail of immortality.'" - Eric Hoffer
4
draw_down 2 days ago 14 replies      
You might say I've observed this many times in my career. I think the best move career-wise is to be one of the people who makes the new thing. Those who clean up after them, bear the brunt of their design flaws and careless mistakes, will never be recognized, appreciated, or remunerated as well. At least in my experience.

Ask yourself this: who is the most famous maintainer you can think of? (Not someone who devised an innovation and then maintained it - pure maintenance)

5
maehwasu 2 days ago 3 replies      
This is like saying LeBron James is less important than Cleveland's role players, because you need five people on a side to have a basketball team.

The question isn't who's "necessary", since everyone who is necessary, no matter in what way, is necessary; necessity is a tautology.

The question is whose contributions are more replaceable.

6
shriphani 2 days ago 0 replies      
There is probably a distribution here that matters - without maintenance there is no foundation for innovation, without innovation there is no motivation to maintain - man wants to produce and consume newer ideas, materials, tools, items etc.

There's a great piece in the Lapham's quarterly about maintaining NYC's infrastructure and how without any maintenance, NYC would be replaced by forest cover within 200 years. Can't find it right now but it is a great read trust me.

7
rusk 1 day ago 0 replies      
This reminds me of a great quote, attributed to Thomas Edison:

"Opportunity is missed by most people because it is dressed in overalls and looks like work"

https://www.brainyquote.com/quotes/quotes/t/thomasaed104931....

8
spc476 1 day ago 0 replies      
You know, if Hollywood made the blockbuster movie "Infrastructure" [1], maintenance might be viewed as a "good thing."

[1] https://www.youtube.com/watch?v=Wpzvaqypav8&t=17m14s

9
theprop 2 days ago 1 reply      
No, it's precisely the opposite! Maintainers (using whale oil for energy) would have driven every whale to extinction and left the world without an energy source a century ago. Maintainers (using horses for travel) would have drowned Manhattan in horse shit a century ago.

Innovation is, if anything, under-rated and under-funded and under-supported. The homes of hundreds of millions of people, and energy itself, are threatened by depleting fossil fuels and global warming... and some of the major efforts to stop this have depended on effectively "insane" entrepreneurs like Elon Musk... not a smart system! All the while, hundreds of billions of dollars in health care costs for, say, unnecessary tests flow to negative-value-addition maintainers.

Maintainers mostly either conservatively follow & accept or exploit the current system. It's innovators who've driven down the cost of lighting your home to a few hours of income or the ubiquity & cheapness of books & information (perhaps to the detriment of wisdom but that's another story) to stopping war through protest to ending non-man-made famine.

10
mentat 2 days ago 1 reply      
On a lighter note, Pump 6 from "Pump 6 and Other Stories" (https://smile.amazon.com/dp/B0071CX7V4/) is a fun take on a world that has become too good at making things that don't need maintenance.
11
kbutler 2 days ago 0 replies      
Innovation changes the world, maintenance keeps it going.
12
frabbit 1 day ago 0 replies      
I found the thesis interesting and plausible: that innovation is fetishized.

But on a slight tangent I wondered whether the "innovation" that they complain about is a particular variant, one that we're all familiar with here: a pseudo-libertarian start-up variant

 "Innovation ideology is overvalued, often insubstantial, and preoccupied with well-to-do white guys in a small region of California"
It seems easy to argue against this tired representative of innovation.

By contrast there are those that would argue that most of the major technological and scientific gains have arisen, not from these VC hype-machines, but from large-scale state planning and investment. One of the best expositions of this argument is from economist Mariana Mazzucato: https://www.youtube.com/watch?v=yPvG_fGPvQo

13
shade23 1 day ago 0 replies      
Wouldn't maintenance lead to innovation, analogous to necessity being the mother of invention?

Most inventions came about as an easier or better way of doing something, which the current maintainer (innovator-to-be?) decided to rework or recreate.

And if the author means only maintenance where no development can be done (even development that makes the maintainer's life easier): IMHO, maintenance procedures should also be constantly improved, and that would lead to innovation.

14
upofadown 2 days ago 2 replies      
Obviously the thesis is true. Maintenance is crucial while innovation rarely changes anything substantive...

I think that is the reason that we hold innovators in high regard. People are very bad at it and it happens so infrequently. We rarely get the right person when we hand out credit so such idolization is usually meaningless, but I suppose in the long run that is not important. We have this irrational need to attach a person to the idea.

So we end up failing to properly credit any of the people that make and keep our civilization...

15
cgio 1 day ago 2 replies      
Or as I like to say, "incompetence makes the world go round". Extracting little glimpses of functionality out of a chaotic mess is a challenging, at times satisfying and definitely valuable exercise that keeps many people at work...

Maintenance is not the opposite of innovation, it is the opposite of good design.

16
danielam 1 day ago 0 replies      
There's an analogy here vis-a-vis tradition/progress. In order to be reasonably sure that a change is an improvement, you must understand what you're changing and how. To borrow a Chestertonian example, if you encounter a fence and you don't know why it's there, find out why before you remove it. Maintainers are in the best position to understand the impact of making changes, and because of that, they're able to function as either advisers or as "innovators" by knowing where improvements can be made and having the knowledge to understand why they're improvements.
17
madenine 1 day ago 0 replies      
The maintainers! I know a couple people connected to this group - heard great things about their 2nd conference last month.

The premise is great. From Russel's article on Aeon:

"We organised a conference to bring the work of the maintainers into clearer focus. More than 40 scholars answered a call for papers asking, What is at stake if we move scholarship away from innovation and toward maintenance? Historians, social scientists, economists, business scholars, artists, and activists responded. They all want to talk about technology outside of innovations shadow."

18
rdiddly 2 days ago 0 replies      
What about Improvers? Not a word about us? You can innovate while maintaining.
19
acchow 2 days ago 0 replies      
Ridiculous. Maintenance is momentum. Innovation is boost.

Each boost is minuscule, but our momentum is enormous after thousands of years of human development, so of course maintaining our momentum gets us incredibly far.

20
pc2g4d 2 days ago 0 replies      
I'd say there's no fine line between maintenance and innovation. Many innovations arise in response to the pains of maintenance.
21
carsongross 2 days ago 5 replies      
Related video by Jordan Peterson, on how liberals and conservatives need one another because liberals (high trait openness) innovate, but conservatives (high trait conscientiousness) maintain things:

https://www.youtube.com/watch?v=3Ho5VZp_ps4

22
deskamess 2 days ago 0 replies      
Does Edison's 1% inspiration (innovation) and 99% perspiration (maintenance) apply?
23
lowbloodsugar 2 days ago 0 replies      
If maintaining these things is important, might we have to wonder how they came to be?
24
sammyo 2 days ago 1 reply      
Maintainers make the world go round, innovators MAKE the world.
25
golergka 1 day ago 0 replies      
OT, but this site looks just great with Javascript turned off (as I usually do with all the "trendy"-looking longreads, as they tend to be processor hogs). Even animations on the title screen. Awesome front-end job.
26
mlindner 2 days ago 0 replies      
I just have to laugh at this title. It's delusional.
27
sebelk 2 days ago 2 replies      
And what about devops? With OpenStack et al., aren't gurus telling us that maintenance (administration) doesn't exist anymore?
15
Amazon Echo Show amazon.com
385 points by metaedge  1 day ago   441 comments top 65
1
eclipxe 1 day ago 6 replies      
People are missing this:

"With the Alexa App, conversations and contacts go where you go. When youre away from home, use the app to make a quick call or send a message to your familys Echo. Alexa calling and messaging is freeto get started download the Alexa App."

Alexa is now in the messaging and communication game.

https://techcrunch.com/2017/05/09/amazon-enables-free-calls-...

2
pwaivers 1 day ago 3 replies      
A few thoughts:

- This is way less creepy-looking than the Amazon Look (https://www.amazon.com/Echo-Hands-Free-Camera-Style-Assistan...), but it is actually very similar.

- It is great to add a screen to the Echo. Just more feedback on interacting with it, and possibility to watch YouTube, Netflix, etc. casually.

- It doesn't have the same cool minimalism as the Echo. The Echo sits on my counter and looks nice when not in use. I think this one looks much clunkier.

- I definitely want to try one.

3
mholmes680 1 day ago 5 replies      
It's interesting to see how fast Amazon can come to market with these new hardware pieces. I guess the fallout of the Amazon Phone at least had some lessons learned in hardware suppliers, etc... I realize they're throwing hardware out there prior to seeing what the software can do with it, but I think it's necessary to get people locked in.

I like their approach from the business perspective. Give the people a voice controlled speaker. Give them a remote! Now, give them a voice-controlled camera! Now, give them a voice-controlled screen! Soon, give them <insert novel sensor> and let them go hands free! Rinse-repeat.

4
silvanojr 1 day ago 4 replies      
I was battling back and forth FOR A MONTH with their skill certification approval team for a skill update that would allow customers to call people by name, where in the first version it was only by phone number.

They would fail the certification because apparently people didn't know how to test, or used fake numbers to make phone calls and complained the call would not connect, or the certificate validation (that was working before) would fail, etc. All sorts of things. VERY frustrating process. I would resubmit the skill for certification without making any changes and get different results.

Now they announce their own calling feature, a week after finally approving our update.

5
justcommenting 1 day ago 2 replies      
The Amazon Echo Show seems very much like a telescreen, straight out of Orwell's 1984: https://en.wikipedia.org/wiki/Telescreen
6
verytrivial 1 day ago 3 replies      
I must be one of those old farts who prefers privacy over convenience.

I do not want what amounts to an always-on black-box surveillance device in my home and I simply do not understand why other people think it is okay. I honestly don't.

Down with this sort of thing!

7
ejtek 1 day ago 3 replies      
It continues to surprise me how far ahead Apple is letting Amazon and Google get in this area. I've always been a big fan of Apple (despite their closed ecosystem), but have to admit that Amazon is seriously outplaying them on this front. Hopefully Apple surprises me and comes up with something even more innovative that can compete.
8
FLGMwt 1 day ago 15 replies      
Any echo owners feel like they would get additional value out of this?

90% of my interaction with my standard echo has been "what's the weather".

Even when I want visual controls for music, I'd rather pull out my phone than walk over to a screen.

9
colemannugent 1 day ago 6 replies      
I feel like this entire product could be a Chromecast-esque dongle that connects to a TV. Having a personal dashboard would actually be quite useful, but this seems like they want to sell appliances not experiences.

Maybe they've gone with this form factor because of the 2x 2" speakers? But why would I want that when it could be plugged directly into my home audio setup?

Or maybe it's so they can include a touchscreen? But I thought the whole point was hands-free conversational interaction?

I guess I'm missing the point of this. Why would I, as a normal consumer, get this instead of a regular Amazon Echo?

10
cphoover 1 day ago 0 replies      
People here are really missing the point... This isn't another ipad it's a different way of interacting. It's not just video message either, it's a new human interface for interacting with software. You can communicate with someone and get suggestions at the same time. Think conversing with a friend and having Alexa aid in the discussion.

Friend 1: Where do you want to go to the movies tonight?

Friend 2: I dunno. Alexa, have any good suggestions?

Alexa: Star Trek is playing at x:00 at X theatre.

Things of this nature.

11
imartin2k 1 day ago 2 replies      
Maybe that's just me, but based on the photos, this device looks quite ugly - which matters for a gadget that people put inside their homes, doesn't it? The "original" Echo has a futuristic design. This one feels more like something created in the 70s or 80s.
12
danso 1 day ago 1 reply      
I'm not willing or interested enough to enable voice activation (Siri) on my phone or desktop, but thought Echo would be nice to have as a music player. The voice recognition is so reliable -- not just the NLP, but the mic array (unlike trying to activate Siri on the iPhone) -- that it's converted me to a true believer in voice interfaces, at least for simple tasks, such as playing music, turning on NPR, and activating timers and alarms. I do have the Fire stick connected to a projector but I've definitely longed for the ability to navigate YouTube or HBO on a tablet-like device with Alexa (again, not just the NLP, but the mic array, which Fire tablets don't have)

This seems like a nice step in that direction but I've been spoiled by the low cost of the Echo Dot, which when it's on sale is so cheap it can be a stocking stuffer. I don't think I could pay $229 for the first generation version of the Show, but will likely get its cheaper, more advanced iterations.

13
noonespecial 1 day ago 3 replies      
Why does it have to be a tiny self contained screen? Until I can say "Alexa, on the main view screen" (right after "Alexa, Earl Grey, hot" of course), we've got progress to make.

Which reminds me, I've got a Keurig to hack...

14
tetrep 1 day ago 6 replies      
I don't see the value in this over a tablet with a stand. The tablet is portable, can do more things, and already exists in many people's homes.
15
voltagex_ 1 day ago 1 reply      
It's the Chumby for 2017, with less freedom to hack.
16
UXCODE 1 day ago 4 replies      
In the United States, what is the need for speech recognition devices? At least in Japan and China, speech recognition technology has not reached a practical level, and demand is small.
17
yalogin 1 day ago 5 replies      
Great, now the Echo will record all video as well, "anonymize" it, and use it to improve their systems. This class of devices is the most puzzling to me. People know their value proposition is to record everything, but they keep buying them anyway. I keep waiting for the day when the scales tip in favor of privacy, but that never happens.
18
test6554 1 day ago 0 replies      
If they are going to enable calling, I sincerely hope they learn from the current phone spam and email spam mess and don't let just anyone call you at any time.

Ideally, you could authorize people to call you by giving each person/entity a different token that authorizes them to call you. Then if that person/entity sells the token to 3rd parties, you not only know who sold you out, but also you have the ability to revoke that token easily.

19
trapperkeeper79 1 day ago 3 replies      
Amazon is killing it in IoT/Smart Home. However, IMHO, they are making a bit of a mistake by not allowing developers to monetize their platform (at least the last time I checked). There were also certain device functions that apps could not utilize (e.g. programmatically mute and unmute). I suspect they'll have a walled-garden approach to their new Echo devices too... if this were open, they'd win it all (again, just my opinion).
20
dharma1 1 day ago 0 replies      
The main thing that annoys me about Echo is that the knowledge graph is so poor. I can only choose from a limited set of things to ask the damn thing, fall back to Wikipedia, or start installing 3rd-party skills.

I wish I could install OK Google on Echo.

Edit - looks like you can, with a custom skill - https://www.youtube.com/watch?v=PR-LVPMU7F4

21
dafty4 1 day ago 0 replies      
Brushed aluminum or some other color scheme would look better. Plastic black matte looks cheap and meh.
22
sergiotapia 1 day ago 0 replies      
Looks like something out of Robocop or Total Recall. I'm not sure if I'm excited or terrified! Let's say both.
23
hungtraan 1 day ago 0 replies      
I honestly think that, given the use cases, the Echo Show would be much more useful if its static structure had a rotating base, allowing the Echo Show to turn toward the source of a voice command (disable-able via a setting for privacy concerns). That would make its screen far more versatile while offering the same hands-free experience.
24
rrggrr 1 day ago 0 replies      
This was the direction I expected Apple to take prior to Jobs passing. It seemed the rumored Apple TV would combine Siri with traditional television. Apple faces serious threats across the entertainment spectrum, from content to device.

Everyone speculating on Apple acquisitions should be considering a Sony or LG buyout. I own stock in neither.

25
JimmyAustin 1 day ago 1 reply      
Interesting choice going with an x86 chip. This could potentially be a hacker's dream if you got Linux running on it.
26
wppick 1 day ago 0 replies      
Eventually, with the internet of things, there will need to be a "home brain" type device to control all of the devices in your house. The company that holds that position of controlling what devices can work with others will have a lot of market power.
27
GrumpyNl 1 day ago 2 replies      
Why you would want to have an electronic spy at home is beyond me.
28
chaostheory 1 day ago 0 replies      
It would be nice to control FireTV with an Echo. Still waiting.
29
malchow 1 day ago 1 reply      
What does a Sonos user do when he has already deployed a dozen Sonos speakers throughout his house? Will there ever be a microphone-only Echo device that can link into a Sonos system?
30
rtechnologies 1 day ago 0 replies      
I developed this same thing 6 months ago. Setup and commands are a bit cumbersome due to being 3rd party but all you need is an Echo device and Android device with the Echo Sidekick app. Does everything the Show does except voice calls but you can send messages through Echo devices to other devices with the SideKick app. https://play.google.com/store/apps/details?id=com.renovotech...
31
kayoone 1 day ago 0 replies      
Love the concept of the Echo; however, I don't see too much value in a screen. For most tasks you'd need one for, it's usually worth the effort to pull out the phone, since you're then also not bound to a specific location.
32
mycodebreaks 1 day ago 0 replies      
why should a Fire tablet paired with speakers not be able to do this?

I am not against any category of products, but as a person who likes to own and manage fewer devices, I like my devices to be versatile.

33
vthallam 1 day ago 1 reply      
This is more like an iPad with a better Siri. I guess talking to parents and watching child cams are the target use cases for this. A device which sits in the living room or bedroom need not show me CNN there.
34
cturitzin 1 day ago 0 replies      
This is exciting for healthcare use-cases. Simple stuff like video clinician checkups or remote monitoring such as tracking and recording physical therapy progress.
35
amelius 1 day ago 0 replies      
> If you want to limit your interaction with Alexa, simply turn the mic/camera button off.

Of course, that button is a handy indicator for Amazon to know when to record stuff :)

36
pound 1 day ago 0 replies      
Now they're much closer to solving 'smart home assistant' online shopping. Communicating only via voice leaves two uncomfortable options: either you blindly trust that you'll get the best price/option ("order xyz"), or you get stuck slowly listening through options (try having a search result list read aloud). This little screen steps over that barrier, enhancing the shopping experience when needed.
37
relyks 1 day ago 0 replies      
If you can place multiple of these in a house and use them all together as an A/V intercom system, that'd be by far a killer feature. E.g. you can talk to your child who's in the basement or to a coworker at another cubicle.
38
coding123 1 day ago 1 reply      
They used the same picture of Dad seeing his grandchild like 3 times, they need to push out different pics.
39
scotchio 1 day ago 0 replies      
Love that Amazon is throwing a lot of options out there.

Only wish the outer shell on this one looked a bit nicer / slicker.

Really want an "Alexa" type replacement for smoke detectors. Location seems perfect for speakers / music in a house.

Scary to think that privacy for average consumer is basically dead.

40
kasperset 1 day ago 2 replies      
Looks like a mini tablet? Why can't a tablet be used for the same purpose? Perhaps it's the audio capability?
41
dafty4 1 day ago 0 replies      
Brushed aluminum would look nicer. Plastic black matte is bleh.
42
LeoNatan25 1 day ago 0 replies      
It's amazing how much of a difference a marketing video makes. This and the Echo Look are not at all that dissimilar, yet one appears to be friendly and essential, while the other is creepy as hell.
43
davidcollantes 1 day ago 1 reply      
From a user's perspective, I think there are too many Echoes. It makes it hard to decide which to get, especially for those who can only afford (or want to deal with) one. Too much fragmentation.
44
vineet 1 day ago 2 replies      
The video calling capability seems especially neat - I wonder if they will interoperate with Facetime, Google Duo/Hangouts, and other video calling protocols. It will make our lives so much easier.
45
PascLeRasc 1 day ago 1 reply      
Are they just "announcing" these devices by putting them up for sale? It feels like we need an Echo keynote to learn about their direction and they could get a lot more hype that way.
46
jlebrech 1 day ago 0 replies      
I could see the use in the kitchen: ask Alexa to look up recipes or turn the page while my hands are greasy or covered in flour.

This functionality will probably need custom firmware, though.

47
MarketingJason 1 day ago 0 replies      
IMO Amazon should focus on enabling and assisting the development of more skills and integrations for echo devices before pushing out newer models or adding features.
48
archeantus 1 day ago 0 replies      
Looks great. But how about that mural?? The main takeaway I had from that video is I need to pick up sponge painting in my kid's rooms.
49
themtutty 1 day ago 0 replies      
Their demo video is cringe-worthy. I understand that you're also marketing to non-technical folks, but it's like a film from grade school.
50
slackoverflower 1 day ago 4 replies      
What is Amazon's long term strategy with all these devices with the main feature still being voice?
51
Kiro 1 day ago 0 replies      
Perfect for viewing and browsing recipes and recipe videos without having to touch the screen.
52
Animats 1 day ago 0 replies      
Not only can you watch it - it watches you!
53
CreepyGuy101 1 day ago 0 replies      
I have to ask why these things aren't gesture activated ...
54
pateldeependra 1 day ago 0 replies      
This is similar to a tablet kept in my room.
55
agumonkey 1 day ago 0 replies      
Kinda merging tablet/webcam + alarm clock usage. Not bad.
56
pmcpinto 1 day ago 0 replies      
So this is kind of a tablet, but with voice as main UI
57
gcb0 1 day ago 0 replies      
So Amazon is trying to corner the tablets-junior-can't-take-to-the-restroom market?
58
mandeepj 1 day ago 1 reply      
A better alternative to the Sony Dash, which got abandoned.
59
salimmadjd 1 day ago 0 replies      
iPhone for grandparents? Or Echo for grandparents?

For me, this product makes sense for the elderly in the digital age, to keep them connected.

60
kensai 1 day ago 0 replies      
"Alexa, submit this comment to HN"

It works! :D

61
staz 1 day ago 2 replies      
"Alexa, show me the kids' room."

Am I the only one that's creeped out by that?

62
CreepyGuy101 1 day ago 0 replies      
Well, that just got creepy. As if security wasn't an issue before.
63
bettyx1138 1 day ago 0 replies      
the video seems like a parody
64
uptown 1 day ago 5 replies      
The telescreen received and transmitted simultaneously. Any sound Winston made, above the level of a very low whisper, would be picked up by it; moreover, so long as he remained within the field of vision which the metal plaque commanded, he could be seen as well as heard. There was of course no way of knowing whether you were being watched at any given moment. How often, or on what system, the Thought Police plugged in on any individual wire was guesswork. It was even conceivable that they watched everybody all the time. But at any rate they could plug into your wire whenever they wanted to.

-Orwell, 1984

65
ganfortran 1 day ago 1 reply      
Here is a crazy idea: why didn't Amazon make this its own Nintendo Switch? A stand with a detachable tablet? Wouldn't this be even better?
16
A crashed advertisement reveals logs of a facial recognition system twitter.com
357 points by dmit  5 hours ago   106 comments top 21
1
kimburgess 3 hours ago 8 replies      
You'd be surprised / scared / outraged if you knew how common this is. Any time you've been in a public place for the past few years, you've likely been watched, analysed and optimised for. Advertising in the physical world is just as scummy as its online equivalent.

Check out the video here http://sightcorp.com/ for an ultra creepy overview. You can even try their live demo: https://face-api.sightcorp.com/demo_basic/.

2
anabis 3 hours ago 4 replies      
In Japan at least, before automated facial recognition, cashiers recorded buyer demographics by hand. I would think other places do it too.

Edit: Here is what the buttons look like. Gender and age. https://image.slidesharecdn.com/hvc-c-android-prototype20141...

3
samtho 2 hours ago 0 replies      
During 2010-2012, I was part of a startup called Clownfish Media. We basically created something very similar to this and got scarily accurate results back then. Given how accessible computer vision has become, the image in the tweet comes as no surprise to me.

Best part: we got a first-gen Raspberry Pi to crunch all the data locally at 2-5 fps. Gender, age group (child, youth, teen, young adult, middle age, senior), and approximate ethnicity were all recorded and logged. Everyone had a unique profile, and we could track people across cameras and days (underlying facial features do not change).

Next time you look at digital signage, just be aware that it is probably looking back at you.

4
cyberferret 4 hours ago 1 reply      
Uh, I got sidetracked and brain-hammered by the devolving discussion on that Twitter thread, and thus couldn't find the context for this pizza shop kiosk. Is it a customer service portal that attempts to identify the person in front of it and match them up with an order, or a plain advertising display that is trying to capture the demographics of the people who happen to stop in front of it and look at it?
5
Smushman 20 minutes ago 0 replies      
Original Reddit post with background:

https://www.reddit.com/r/norge/comments/67jox4/denne_kr%C3%A...

"...peppes pizza in Oslo S."

6
korethr 3 hours ago 2 replies      
Though not at the level depicted in the movie, I am nonetheless reminded of Minority Report.
7
kirykl 17 minutes ago 0 replies      
With this much personal care to really know their customers by face, I'm sure they put just as much personal care into the quality and craft of the product /s
8
pavement 2 hours ago 2 replies      
Okay, time to start wearing ski masks and Santa Claus costumes in public at all times.
9
spsful 2 hours ago 0 replies      
Oh look, Ghostery's product pitch comes in handy now: https://www.youtube.com/watch?v=EKzyifAvC_U
10
DonHopkins 3 hours ago 0 replies      
There should be one of these in front of the ad to give passersby more privacy:

http://hackaday.com/2010/10/15/window-curtain-moves-to-scree...

11
nateberkopec 3 hours ago 1 reply      
I saw a pitch for this tech 5 years ago. Not sure the name of the company. The idea is they can measure engagement (how long you looked), approximate age and sex.

Five years ago it didn't seem so sinister. A lot has happened since then, I guess.

12
cheetos 1 hour ago 1 reply      
Is this unethical if everything is done locally and no data is stored or resold?
13
ddmma 3 hours ago 0 replies      
Cognitive Services-enabled applications, why so complicated? It's like spotting a new hippo in the wild: https://azure.microsoft.com/en-us/services/cognitive-service...
14
Cyph0n 4 hours ago 2 replies      
I wonder how accurate these measurements are in practice. They could just be placeholder implementations, right?
15
nom 3 hours ago 1 reply      
Windows.. TeamViewer.. using the primary screen to display the ad.. and the camera is not even hidden..

Amateurs.

I wouldn't be alarmed by this; they probably don't even know the accuracy of the algorithm they are using or how to interpret the collected data correctly.

16
avg_dev 4 hours ago 2 replies      
Well, that's disturbing. I bet we'll see much more of this in the coming days.
17
yegle 3 hours ago 1 reply      
Is this new? Things like Affectiva have been out for a long time.
18
lfender6445 2 hours ago 0 replies      
Just so I'm clear, this is someone in a pizza shop looking at a Windows kiosk with a camera?

It would be interesting to see the ad and how / if it changes based on who is watching.

19
dghughes 2 hours ago 0 replies      
male young adult

I think they zeroed in on their demographic, good job!

20
Freeboots 2 hours ago 0 replies      
Not many smiles :(
21
gech 2 hours ago 0 replies      
Despicable. Any authors of this work should be publicly shamed and punished. And don't get me started on what should befall the owners of capital that drove this.
17
Opera is Reborn opera.com
305 points by riqbal  20 hours ago   279 comments top 59
1
eis 15 hours ago 3 replies      
The main driver for innovation and growth of the open web was that it is open. The diversity we got from that was tremendous. Anyone could make a website with whatever content they could think of and people all over the world could access it. There was some kind of "web neutrality" in browsers. They didn't prefer one site over the other. The second big boost came from the extensibility and customizability of browsers through extensions, which gave people even more control over what they would consume. And on top of that, the browsers became open source, a big win all around.

Opera is going the other way. They have a closed source browser that directly integrates some specific products into the browser. You can't completely get rid of them, you can't integrate another product in the same way and I'd guess an extension can't mess with these pre-packaged addons. It's nice that you can have a messenger in a sidebar next to your current browsing page. But it's taking control away from users and that's not the way forward in an open web. Why can't this be an extension? Why can't it have a feature that lets me run two tabs side by side so I can choose what products to use there?

In general I think this Opera Reborn is still far behind, say, Firefox or the official Chrome. Only now can you change the filter lists in their ad blocker, something that ad-blocking extensions could do for years. Now you can change Opera's theme; I could make Firefox look like whatever I wanted forever. For all the love that the old Opera deserved, I don't think Opera is the future of web browsing.

2
Falkon1313 19 hours ago 4 replies      
>desktops and laptops, while theoretically more powerful multitasking tools, have been left behind

>Browsing and chatting simultaneously is cumbersome and inefficient now, as you need to switch between tabs when responding to a message

Or, since we all have widescreen monitors (and often multiple monitors) you could just have your messengers in a window next to the browser instead of a sidebar within the browser. Seems like a solution looking for a problem. What good is allowing messengers to reside within your browser other than that it lets the people who are tracking your browsing habits simultaneously spy on your messages?

3
lucb1e 19 hours ago 3 replies      
I'd like to try, but I'd probably not end up using it because it's not open source. I feel silly for rejecting a product based on that, but openness is important to me. Here's for hoping they open source it soon!
4
gagabity 20 hours ago 3 replies      
This reminds me of an interesting thing that happened when I uninstalled Opera VPN on my Android. After uninstalling, it automatically opened up a browser to one of those "Tell us why you uninstalled" pages that you normally see on the desktop; this shouldn't be possible on Android.

I think they are doing this by having Opera Browser watch for uninstallation of Opera VPN, and possibly vice versa, and when one detects the other has been uninstalled it launches the page. Clever but annoying.

5
Jedd 17 hours ago 2 replies      
> Social messengers completely changed our lives, by allowing us to work, discover new things and communicate at the same time.

Nope.

> One of its novelties is the ability to seamlessly hop between discovering new content and chatting with friends, or even share online discoveries while browsing.

Happily I know no one that uses, or will use, this new version of Opera. I can imagine few things that would disrupt, annoy, and / or reduce my efficiency (or enjoyment) at my computer more than being subjected to even more random thoughts from the easily distracted.

6
jug 19 hours ago 5 replies      
I don't dare to use this since Opera got bought by that Chinese consortium of companies I'm not familiar with. Not sure if I'm being overly cautious? I'm thinking especially of Opera Link (which had a recent security breach to boot!). Wouldn't that hand my passwords over to China? And I do want to use some sort of syncing; I'd go crazy without anything there, or with some iffy third-party addon.

It's not even just about a trust issue with Chinese company culture. It's that the Chinese government has ways of getting into even resisting companies at its whim, ways judged acceptable under standards completely different from Norwegian law. So: even if I trusted Opera Software, even if I trusted this consortium that I don't know, even then...?

7
Kholo 19 hours ago 4 replies      
Hey Opera folks who might be reading this: work on simplifying offline access to web content. I want to be able to maintain a ~100 GB offline data store, full-text searchable. Wikipedia, stackexchange/stackoverflow, Khan Academy, zealdocs and a whole bunch of other sources of useful browser-renderable web content provide dumps of their data.

And then they end up having to build special apps and extensions and other garbage that work around and hack the cross-domain/local-file policy, just so the content already on my disk can be read and searched by the browser.

When such a treasure trove of web content is accessible offline the browser can and should be the way to access it.

As the web becomes more and more a corporate sponsored attention sink strong offline support can be a valuable browser feature.

8
symmetricsaurus 16 hours ago 1 reply      
Ok, so I installed it to try it out.

Trying to get the WhatsApp widget running, it seems like I must allow access to the camera and the microphone on my computer.

This is ridiculous.

On mobile, this is not needed, and is maybe even fine. When the app is turned off you know it isn't listening or taking pictures.

On a desktop computer there is no such guarantee. On the other hand this applies to all software I'm running. Maybe it's time to put a piece of tape over the camera.

9
JimDabell 17 hours ago 3 replies      
Why would I want a messenger application embedded in my web browser? What advantage does that have over simply having it as a separate application?
10
kmfrk 19 hours ago 0 replies      
Opera's been rebooted more times than the Spider-Man movies at this point.
11
Sagane 19 hours ago 1 reply      
I wonder why [.* ]baidu.com, [.* ]facebook.com, [.* ]google.com and [.* ]yandex.com are in the default list of exceptions when blocking ads. Any ideas?
12
romanovcode 19 hours ago 0 replies      
Closed source software owned by the Chinese.

I'm no tinfoil-hat but this combination doesn't really strike as "privacy focused".

13
sprafa 19 hours ago 1 reply      
Opera used to be amazing; a few years ago it had a built-in RSS, mail and IRC client, and even later (right before the acquisition) they were still ahead with their UX, having a "Read Later" section in the browser and letting you easily customise and make folders on your "first page".

Now it's just a mess, and Vivaldi is not much better.

14
LordKano 12 hours ago 1 reply      
I guess it's time for me to be a crotchety old man again.

Opera lost me when they followed Chrome in eliminating the menu bar. If they haven't rectified that, I'll stick with Firefox.

15
niftich 14 hours ago 1 reply      
Are the FB Messenger, WhatsApp, and Telegram integrations official (i.e. known by, approved by, and/or supported by their respective companies)?

Asking because I can't find a reverse-shoutout to Opera on either Facebook's, WhatsApp's, or Telegram's blog. This is not altogether unusual (especially given the recency of the Opera announcement) but it still makes me curious.

16
aloisdg 20 hours ago 2 replies      
Did they open Opera's source? There is still room for more FOSS browsers.
17
T-A 19 hours ago 1 reply      
we bring you Opera Reborn, the first browser to allow messengers to reside within your browser, without the need to install any extensions or apps

https://en.wikipedia.org/wiki/Rockmelt

18
retrac98 20 hours ago 6 replies      
Where does Opera get their money? Building and maintaining a browser seems like an expensive undertaking, and yet, despite nobody I know using or even testing their websites on Opera, they've been actively developing and maintaining their browser for years.
19
eps 20 hours ago 2 replies      
So chatting while browsing is the future of desktop browsing? Sounds rather... unambitious.
20
TomyMMX 19 hours ago 4 replies      
How is it possible that Opera got all my saved passwords and sh*t from Chrome?

I did not authorize this during install.

21
steinso 19 hours ago 0 replies      
So the chat program has to be on the left side, what if I want to move it somewhere else?

Perhaps it could reside in a little "window", then it could be moved anywhere! ....wait, this sounds familiar.

/s

22
rcgs 19 hours ago 1 reply      
I think this is the 3rd or 4th time Opera has been reborn in recent memory? I actually quite like where they're going with the user experience. But who owns Opera now?
23
FridgeSeal 19 hours ago 2 replies      
Pretty sure Opera is part-owned by a Chinese consortium now, and to my knowledge their revenue comes from ads and adtech. So not super certain how private it is likely to be...
24
arca_vorago 17 hours ago 0 replies      
Until they open source it I will not be touching Opera, despite some of the features in it I really like. The future is open source, and Opera is not.
25
nonsince 19 hours ago 4 replies      
I really like the concept of bringing i3-style tiling management to the browser space, but why is it specific to messengers? Since most well-designed websites work at any size, couldn't you just have a permanent tab at the side of the screen for, say, Twitter, or some browser-based IRC client? Admittedly, I'd probably use Facebook Messenger as this tab anyway, so my needs are covered, but it seems like a missed opportunity. Obviously, more technically-inclined individuals on more technically-inclined OSes have this already with proper tiling window managers, but Windows and macOS users probably don't want that system-wide, and the browser would be the perfect place to introduce it.
26
forvelin 20 hours ago 1 reply      
Looks nice, but doesn't it seem like Opera tends to be reborn every few years or so?

Still, I was a long-time Opera user, but it hasn't felt the same since they went to WebKit. This seems promising; time to try again.

27
polskibus 19 hours ago 1 reply      
Slightly offtopic: did the entire Opera development team move to Poland? Is that part of a new strategy under the new owner, or was it like that for a long time?
28
theprop 19 hours ago 0 replies      
Reborn?! What!

Opera is now a combination of the Epic Privacy Browser and the Vivaldi Browser. They copied Epic's privacy features and in-built VPN/proxy, and Vivaldi's social networking sidebars (which, incidentally, Opera pioneered; Vivaldi's founder is the Opera co-founder).

29
Markoff 8 hours ago 1 reply      
Chinese browser with access to all my data I visit during surfing? No thanks.
30
d--b 14 hours ago 0 replies      
"the future of the browser" and the first 3 features mentioned are "integrated chat", "new color schemes" and "new logos". Uh?
31
fiatjaf 16 hours ago 0 replies      
Why hardcode WhatsApp and Facebook Messenger instead of allowing multiple tabs to show at the same time, like a tiling window manager? Two of them could then be WhatsApp and Facebook Messenger.
32
mathw 19 hours ago 0 replies      
Or even better, they could have made the side-by-side feature work for any web pages at all as a more general layout feature, with some kind of easy access toolbar to allow you to pop them in and out as desired.

I think "Reborn" is a bit of a stretch here.

33
znpy 17 hours ago 0 replies      
Considering how much I use the web browser, how desperate for market share and money Opera is, and the fact that Opera, or whatever it's called now, is closed source, I would never even install it.
34
ivanhoe 13 hours ago 0 replies      
These moves that Opera now makes remind me a bit of how, back in the day, WinAmp didn't know how to fend off the competition, so they started integrating tons of "extra functionality" out of sheer desperation... it never works.
35
felixsanz 14 hours ago 0 replies      
Cool, Opera innovates but Chrome is still the way to go. I want a browser not a social app...
36
toyg 18 hours ago 0 replies      
Funny for this to happen on the same week I switched my default browser from Firefox to Vivaldi [1]. That is really "Opera reborn", since it's built and owned by the original team of Opera developers - which is probably why it's so good.

[1] https://vivaldi.com/

37
xbenjii 18 hours ago 3 replies      
Some shady stuff going on in an injected browser.js file: https://gist.github.com/xbenjii/2048d8edf135b04790be593ee69e...
38
mrmondo 20 hours ago 0 replies      
I wouldn't be so cavalier about promoting VPN as a potential way to improve user security...
39
metehan 15 hours ago 0 replies      
A great solution to a problem that doesn't exist.
40
neillyons 17 hours ago 1 reply      
I think if Opera had a price, that would be a major distinguishing point from the other browsers. Tailor to those users, IMHO.
41
debuggerpk 13 hours ago 0 replies      
How does Opera make money now? As per my understanding, in the pre-iOS-and-Android era, Opera dealt directly with device manufacturers and sold their browser as the default on devices. Since Android, how are they even making money?
42
la_oveja 19 hours ago 1 reply      
Telegram, FB Messenger, Whatsapp... No Slack?
43
ruleabidinguser 19 hours ago 0 replies      
RockMelt 2?
44
hoschicz 17 hours ago 0 replies      
EasyPrivacy by default means Google Analytics is blocked in Opera.
45
denzil_correa 18 hours ago 0 replies      
blogs.opera.com throws me a security certificate error on Firefox
46
tiku 19 hours ago 0 replies      
I've used their browser for a year now, because it has better battery saving than Chrome and some other cool features, like detachable video windows and a built-in VPN. You can now even use all the Chrome plugins in Opera, via another plugin.
47
lilbobbytables 15 hours ago 0 replies      
I feel some MSN Explorer nostalgia.
48
flamedoge 17 hours ago 1 reply      
Looks like Edge browser with UWP sidebar
49
9BillionMistake 12 hours ago 0 replies      
On the Linux desktop, can my Opera tabs be at the top of the screen yet?

No? Oh well, back to chrome.

50
zmix 10 hours ago 0 replies      
Tragic Fail ;-)
51
b0rsuk 15 hours ago 0 replies      
Browser for extraverts.
52
jameskegel 20 hours ago 2 replies      
I was under the impression that the rebirth of Opera Browser was Vivaldi, but it seems I was wrong. Bravo, guys. Nice work.
53
baby 17 hours ago 0 replies      
No tabs on the side :( ?
54
antisthenes 12 hours ago 0 replies      
As a non-techie, this just looks like a closed source Vivaldi clone.

None of the listed features are relevant to me ("Animations", "fresh look and feel", etc.) and my adblock performance is already adequate.

55
JustSomeNobody 16 hours ago 0 replies      
> Browsing and chatting simultaneously is cumbersome and inefficient now, as you need to switch between tabs when responding to a message. We believe this needs to change.

Then:

>If you use more than one messenger, you can easily switch between them by using our shortcut key for quicker access (⌘ + ⇧ + m on macOS, CTRL + SHIFT + m on Windows and Linux).

So, why would this be any better than switching tabs? One thing about a browser, I usually have my hand on my mouse and not the keyboard.

56
bigbugbag 18 hours ago 1 reply      
Hey this google chrome skin has made some progress since last time I heard about it. It's still light years behind opera 12 and sadly will never regain what it lost.

If you're interested in the actual reborn Opera experience, it's Vivaldi you're looking for; Opera's co-founder and many of the original Opera team are making Vivaldi, and though it's based on the Blink engine too, hence limited and unable to do exactly what Opera used to, it has the same philosophy and vision.

The other alternative is the free software otter[1] browser aiming at recreating the opera 12 experience.

[0]: http://vivaldi.net/
[1]: https://otter-browser.org/

57
goosh453 17 hours ago 0 replies      
they should open the source already
58
mtgx 19 hours ago 3 replies      
I gave up using Opera for good once it was acquired by the Chinese.
59
analogmemory 19 hours ago 12 replies      
18
Making a Game in Rust michaelfairley.com
397 points by b1naryth1ef  1 day ago   120 comments top 17
1
vvanders 1 day ago 2 replies      
Great stuff, lines up a lot with what I'd care about as an ex-gamedev and spending a bit of time with Rust. One minor point:

> First class code hot-loading support would be a huge boon for game developers. The majority of game code is not particularly amenable to automated testing, and lots of iteration is done by playing the game itself to observe changes. Ive got something hacked up with dylib reloading, but it requires plenty of per-project boilerplate and some additional shenanigans to disable it in production builds.

Lua is a great fit here and interops with Rust (and just about everything else) very well.
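
A minimal sketch of the dylib-reloading approach mentioned above, using the libloading crate. The crate and its API are real, but the library path and the game_frame entry point are made-up placeholders, and a real project needs exactly the kind of per-project boilerplate the author describes:

  // The game logic crate is built with crate-type = ["dylib"] (or "cdylib")
  // and exports: #[no_mangle] pub extern "C" fn game_frame() { /* one frame */ }
  use libloading::{Library, Symbol};
  use std::time::SystemTime;

  struct HotReloader {
      lib: Library,
      stamp: SystemTime,
  }

  impl HotReloader {
      fn tick(&mut self, path: &str) {
          // Reload the library whenever the file on disk changes.
          if let Ok(modified) = std::fs::metadata(path).and_then(|m| m.modified()) {
              if modified > self.stamp {
                  self.lib = unsafe { Library::new(path).expect("reload failed") };
                  self.stamp = modified;
              }
          }
          // Re-resolve the symbol every frame so it points into the fresh library.
          unsafe {
              let frame: Symbol<unsafe extern "C" fn()> =
                  self.lib.get(b"game_frame").expect("missing symbol");
              frame();
          }
      }
  }

  fn main() {
      let path = "target/debug/libgame.so"; // platform-specific file name
      let mut hot = HotReloader {
          lib: unsafe { Library::new(path).expect("initial load failed") },
          stamp: SystemTime::UNIX_EPOCH,
      };
      loop { hot.tick(path); } // a real loop would also poll input, render, sleep
  }

Note that any state owned by the dylib is lost on reload, which is presumably part of the "additional shenanigans" the author alludes to.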

2
deckarep 1 day ago 0 replies      
I largely echo the sentiment of using Rust for game development. The world doesn't need another Flappy Bird clone, but that's what I wrote, because I ended up porting a Go version using SDL, originally by Francesc Campoy from the Golang team: https://github.com/campoy/flappy-gopher

I was able to build the Rust version fast, and the SDL library is actually quite usable/stable for most things.

Flappy-Rust has particle effects, the beginnings of parallax scrolling and basic collision detection. The Rust code ended up being pretty reasonable; however, I'm quite sure there are a few places where I could have simplified the sharing of assets.

If anyone is interested in this space check out my repo: https://github.com/deckarep/flappy-rust

Also please see the README.md where I talk a bit more in-depth about how the Rust version differs from the Go version.

Here is a .gif preview of the first iteration of the game: https://github.com/deckarep/flappy-rust/blob/master/flappy-r...

3
Lerc 1 day ago 2 replies      
It would be nice to have a minimal gameish program as a example for people learning Rust.

When I teach kids javascript I start with an etch-a-sketch. It's aided by a simple library to hide the mechanics of the HTML canvas element, context, etc. This allows it to be small enough that they can view it all in one go and build upon it.

 print("Draw with the arrow keys"); var cx=320; var cy=240; function update() { // the arrow keys have key codes 37,38,39 and 40 if (keyIsDown(38)) { cy-=1; } if (keyIsDown(40)) { cy+=1; } if (keyIsDown(37)) { cx-=1; } if (keyIsDown(39)) { cx+=1; } fillCircle(cx,cy,6); } run(update);
There might be merit in writing one of these in every language (and a companion that uses the mouse), maybe placing it on GitHub. With a really simple program like this you can focus on learning the language while making something. It's a tough job figuring out how to learn a language while simultaneously learning how to write the boilerplate needed to get something onscreen.

4
kibwen 1 day ago 3 replies      
I wasn't even aware that you were allowed to ship Rust code in iOS apps, I thought that Apple had a whitelist of allowed languages?

EDIT: And for those interested, you might want to check out Rust's recent inclusion in a AAA title: https://www.reddit.com/r/rust/comments/69s225/rust_makes_it_... :P

5
wyldfire 1 day ago 4 replies      
> The include_* macros are great for packaging. Being able to compile small assets directly into the binary and eschewing run time file loading is fantastic for a small game.

For C/C++/etc devs looking for something similar, BFD objcopy supports an "-I binary". It will emit an object file with _binary_objfile_start, _binary_objfile_end and _binary_objfile_size symbols.

But I have got to say that making it a language feature means that Rust is truly a batteries included language.
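
To make the parent's point concrete, here is what the include_* macros look like in use; include_bytes! and include_str! are real std macros, though the asset paths below are invented for illustration:

  // Paths are resolved relative to this source file and read at compile time,
  // so the assets end up inside the binary itself -- no run-time file I/O.
  static SPRITES: &[u8] = include_bytes!("../assets/spritesheet.png");
  static LEVEL_1: &str = include_str!("../assets/level1.txt");

  fn main() {
      println!("spritesheet: {} bytes", SPRITES.len());
      println!("level 1: {} lines", LEVEL_1.lines().count());
  }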

6
alkonaut 1 day ago 1 reply      
> I had two or three false-start attempts at learning Rust where Id spend a few hours with it, not get anything done, and put it down until a few months later

This sounds (sort of) encouraging. I was kind of expecting to learn it by putting in an hour here and there e.g. 2-4h/month. But I'm beginning to think that might not cut it...

7
lohengramm 1 day ago 2 replies      
I would love to read a detailed explanation of the packaging process for iOS and Android.
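
Until such a write-up appears, the broad shape for iOS (as I understand it, not necessarily what this author did) is to build the crate as a static library with cargo-lipo, a real tool for producing universal iOS libraries, link the resulting libgame.a into the Xcode project, and expose a C ABI; the function names below are made up. Android is analogous but goes through the NDK toolchain and typically the jni crate.

  // Cargo.toml:    [lib] crate-type = ["staticlib"]
  // Build:         cargo install cargo-lipo && cargo lipo --release
  // Then link the produced universal libgame.a into Xcode and declare these
  // functions in a small C header so Swift/Obj-C can call them.

  #[no_mangle]
  pub extern "C" fn game_init() {
      // allocate and store global game state here
  }

  #[no_mangle]
  pub extern "C" fn game_frame(dt: f32) {
      // advance the simulation by dt seconds and render
      let _ = dt;
  }
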
8
MaulingMonkey 1 day ago 0 replies      
Neat! I was just complaining about the lack of multiplatform Rust games proving out the concept. If only there were some consoles or handhelds on the list... although iOS/Android being on the list is somewhat encouraging.
9
Buge 21 hours ago 1 reply      
The performance on my Android phone (Nexus 6) is not good. I would have thought Rust would be fast. The fps is fairly low, a lot lower than Maps or Chrome, at least when Maps isn't freezing up. And it seems the transitions between levels might be proportional to fps, because they are irritatingly long.
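
If the transitions really are stepped once per frame, that would explain this; the usual fix is to advance animations by elapsed time rather than by frame count. A sketch of the standard pattern (not the game's actual code):

  use std::time::Instant;

  fn main() {
      let mut last = Instant::now();
      let mut progress = 0.0_f32; // transition progress, 0.0..=1.0
      let speed = 0.5_f32;        // full transition in 2 s at any frame rate

      while progress < 1.0 {
          let now = Instant::now();
          let dt = now.duration_since(last).as_secs_f32();
          last = now;
          progress = (progress + speed * dt).min(1.0);
          // draw_transition(progress) would render the current frame here
      }
  }
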
10
yorwba 21 hours ago 1 reply      
About floats implementing PartialOrd and not Ord:

I have not tried out Rust yet, but couldn't this be solved by wrapping the float in a single-field struct that checks for NaN on construction, implements Ord using PartialOrd and otherwise passes everything through to the ordinary float inside?

If this isn't possible, I'm definitely interested in the reasons.
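
It is possible, and it's essentially what the ordered-float crate provides. A hand-rolled sketch of the idea:

  use std::cmp::Ordering;

  /// A float wrapper whose constructor rejects NaN, making a total order valid.
  #[derive(PartialEq, PartialOrd, Clone, Copy, Debug)]
  struct NotNan(f64);

  impl NotNan {
      fn new(x: f64) -> Option<NotNan> {
          if x.is_nan() { None } else { Some(NotNan(x)) }
      }
  }

  // Sound because NaN, the only value breaking reflexivity, is excluded.
  impl Eq for NotNan {}

  impl Ord for NotNan {
      fn cmp(&self, other: &Self) -> Ordering {
          // partial_cmp returns None only when NaN is involved, which can't happen
          self.partial_cmp(other).unwrap()
      }
  }

  fn main() {
      let mut xs: Vec<NotNan> =
          [2.5_f64, -1.0, 0.0].iter().filter_map(|&x| NotNan::new(x)).collect();
      xs.sort(); // sort requires Ord, which bare f64 does not implement
      println!("{:?}", xs);
  }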

11
esistgut 20 hours ago 1 reply      
It is an interesting educational project, but staying away from Unreal or Unity3D is really a tough decision if your project is going to need features that are already developed and well tested in one of those engines.
12
newsat13 1 day ago 0 replies      
Great writeup. Eye-opening that you can actually write iOS apps with Rust. w00t, totally trying this for my next app. (It's a bit too expensive for my liking, but otherwise I would have given your game a shot.)
13
educar 1 day ago 0 replies      
Is there a write-up on how the rust code ends up being embedded in the iOS app?
14
jlebrech 18 hours ago 1 reply      
That's a great twist on the snake game. Does Rust compile to WASM?
15
Ace17 22 hours ago 1 reply      
Screenshot, please!
16
remotehack 1 day ago 0 replies      
Did a double take on the title.
17
felipemnoa 1 day ago 7 replies      
This is probably not a popular opinion here on HN, but why does it matter what language you make your game in? You can use virtually any language to make a game. From my point of view, the best language for a game is whichever makes you most productive at cranking out code. And we all have our own personal preferences about which language is best, which I think is fine; you should code with the one you feel most productive with. At the end of the day users will not be able to tell the difference. All that matters is whether your game is fun or not.
19
JDK 9 modules voted down by EC jcp.org
312 points by virtualwhys  1 day ago   177 comments top 28
1
lvh 1 day ago 10 replies      
This is unambiguously a good thing. Jigsaw didn't solve the toughest problems it had hoped to solve, and there are much less impactful ways to accomplish some of these goals. As it stood, Jigsaw broke tons of applications that did non-trivial things with class loaders, reflection or even circular deps. As a consequence, while Jigsaw (after significant effort) might not be disastrous for Java, it's pretty awful for most things that aren't Java that target the JVM, like Clojure. Meanwhile, significant incremental engineering benefits in JDK9 were being held back by this sweeping change; so now maybe we can just get a better JVM without breaking the world.

As it stood, Jigsaw required a ton of engineering effort internally and externally and severely broke the backwards compatibility that has made Java such a workhorse of the enterprise. All of that at very questionable benefit to the end user: still no module-level versioning of deps. That doesn't mean this effort is totally wasted: the oldest parts of the stdlib were due for a touch-up.

Lord knows that I have my differences with how Red Hat operates sometimes, but I don't think suggesting their vote against Jigsaw is somehow a political plot to benefit OSGi or JBoss modules is reasonable. (FWIW: I also don't think OSGi's and JBoss' alternatives are great, but that's OK, because they're opt-in.) That theory has little explanatory power: why are all of these other companies voting against?

Disclaimer: I have no stake in any of the voting parties, but I do write a lot of JVM-targeting software.

2
karianna 1 day ago 4 replies      
Hi all, I'm the EC representative for the London Java community (~6000 Java developers, have lots of global outreach programmes etc) - here's our more detailed post on why we voted "No". https://londonjavacommunity.wordpress.com/2017/05/09/explana... - Happy to answer further questions although I've probably found this thread too late :-)
3
bad_user 1 day ago 1 reply      
Red Hat has voiced their concerns in the following document, linked in their comment: https://developer.jboss.org/blogs/scott.stark/2017/04/14/cri...

Found a follow up in this blog post: https://blog.plan99.net/is-jigsaw-good-or-is-it-wack-ec634d3...

Interesting to see Twitter voting No as well, with the comment "Our main concern is that it is likely that this JSR will prove disruptive to Java developers, while ultimately not providing the benefits that would be expected of such a system".

4
cakeface 1 day ago 6 replies      
I realize that there are some actual technical questions here but I can't help but be fascinated by the politics of this.

Are IBM and RedHat just against Jigsaw because of commercial interests in JBoss Modules and OSGI? Do they actually care about the technical merits? How can Oracle get to this point in the process without more buy in from the community?

Also I never knew who was part of the EC. Interesting breakdown on who voted for or against.

Will Java 9 go out without Jigsaw? I imagine if they have already modularized the JDK then it's impossible to release Java 9 without Jigsaw.

5
apetresc 1 day ago 1 reply      
I have no idea how the JCP works, but I find it odd that Google doesn't have a vote in this process. They're clearly one of the biggest users and developers of Java in the world, even if you discount Android. Did they choose to abstain from this process, or were they kept out somehow?
6
joshmarinacci 1 day ago 0 replies      
I was involved in these discussions when I was at Sun.. ten years ago! The major players and their opinions haven't changed. There is no solution that will please everyone.
7
rattray 1 day ago 0 replies      
For others who aren't familiar with Java's governing structure,

EC = Executive Committee

https://www.jcp.org/en/home/index

8
Insanity 1 day ago 2 replies      
I feel like some of these complaints, such as (edit: some of) those by redhat, could have actually been mentioned even before any development on jigsaw began.

It's a bit odd to me that these really large complaints come out now, instead of much earlier in the JDK 9 process. Red Hat could maybe even have raised them before any work was done.

Such as noted here https://developer.jboss.org/blogs/scott.stark/2017/04/14/cri...:

> Fragmentation of the Java community

That seems like a valid concern, which could have been raised much earlier. Or maybe it was, but people decided to push on with Jigsaw anyway?

I did not really follow JDK9 development all that much, apart from the occasional blog. But it just feels to me like at least _some_ of these issues should have been raised much earlier.

10
inglor 1 day ago 0 replies      
"Finally, we adjure all members and the spec lead to come back to the table and communicate directly to each other instead of blaming each other through blogs and open letters!"

Ouch, can anyone explain this drama and attitude to me? I've seen it ruin or stall the development of language or platforms multiple times now.

11
_Codemonkeyism 1 day ago 0 replies      
Jigsaw would have helped thousands of developers. Yes, it does not solve all use cases, but now none of the use cases are solved.

Killed for political reasons, because of OSGi/IBM/Redhat and money. What the OSGi evangelists won't understand is that Jigsaw does not aim to solve the same problems as OSGi.

Bonus: if OSGi were a good thing, most developers would be using it after its decades of existence, instead of practically no one. Which proves that OSGi does not solve the problems of the mainstream developer, which means making Jigsaw into OSGi-NG is not the way to go.

12
lenkite 1 day ago 3 replies      
I am afraid this is the death knell for the Java Community Process. I simply don't see Oracle taking Jigsaw out of Java 9 after investing so much effort into it. Also, a lot of folks were looking forward to Jigsaw as a lightweight module system compared to heavyweight OSGi.
13
EdSharkey 1 day ago 1 reply      
I like to think of the Java ecosystem as a giant elephant balancing on one leg, careening downhill on its squeaky little JVM roller skate.

I still remember that Java started as a toy language. And for all its security features and incremental improvements to JVM byte codes over the years, it still looks like a toy.

The classloader is the roller skate and the zillions of interoperating jars are the elephant. For the most part, the elephant stays up and continues blasting down that hill.

I used to think the scene was funny, but now I'm uncertain. Only through tools like Gradle and Maven have we papered over the inadequacies of the classloader system and gotten some control of it.

I had hopes that jigsaw would give the elephant an option of a little car to drive or at least a second skate to aid with balance. But safety trumps all, and such a huge breaking change should be avoided if we can.

14
Yhippa 1 day ago 2 replies      
Am I interpreting Reinhold's comments correctly: Jigsaw isn't perfect but we need to get something out there, see what breaks, fix it, and iterate?
15
guilt 20 hours ago 0 replies      
This is not a good thing.

They were finally getting to have a fully built out environment - which could lead to better end user packaging, stripping down a library to its bare essentials.

For instance: what has jpackage achieved for a distribution like Red Hat? Does anyone's Java app even pick up these RPM-based dependencies for the classpath? Heck no.

Every known large-scale Java application pretty much bundles a bunch of JARs (probably not Logstash, but that's only one exception).

It is very clear that JDK9 will not be as awesome as it was originally supposed to be. Deeply disappointed.

16
readams 1 day ago 2 replies      
It's really a shame as OSGi is a nightmare.
17
LeanderK 1 day ago 1 reply      
Can somebody provide a bit of background? I was looking forward to JDK 9 modules, but I didn't spend much time reading into it. I am quite surprised that it was voted down, since I think introducing modules is a step in the right direction. Also, judging from the comments, it seems that there will be a second vote?
18
CodeSheikh 1 day ago 2 replies      
Interesting to see NXP Semiconductors there. Being primarily a semiconductor manufacturer, how do they use the JVM-based (Java etc.) language ecosystem in their products? An SDK to program their boards for development purposes?
19
yegortimoshenko 1 day ago 2 replies      
The single most important feature of Java is backward compatibility. It is a temporal prerequisite of "write once, run everywhere".
20
amyjess 1 day ago 1 reply      
Bye bye JCP then. Oracle's just going to go it alone from now on.
21
frik 1 day ago 3 replies      
How can one join the EC? Smaller and larger companies with very different focus, and even a community. Interesting.

 Azul Systems, Inc.           Yes
 Credit Suisse                No
 Eclipse Foundation, Inc      No
 Fujitsu Limited              Yes
 Gemalto M2M GmbH             Yes
 Goldman Sachs & Co.          Yes
 Grimstad, Ivar               No
 Hazelcast                    No
 Hewlett Packard Enterprise   No
 IBM                          No
 Intel Corp.                  Yes
 Keil, Werner                 No
 London Java Community        No
 MicroDoc                     Yes
 NXP Semiconductors           Yes
 Oracle                       Yes
 Red Hat                      No
 SAP SE                       No
 Software AG                  No
 SouJava                      Yes
 Tomitribe                    No
 Twitter, Inc.                No
 V2COM                        Yes

22
crudbug 1 day ago 0 replies      
Glad to see this, not a fan of "requires / exports" keywords.

Java packages already use import. So, logically export should be associated with packages not modules.

Module being meta-package, should follow these.

"requires <package>"

"declares <package>"

23
mal34 20 hours ago 1 reply      
Oracle (the neXt Micro$oft) is killing ex-Sun's Java language and software.
24
popopobobobo 1 day ago 2 replies      
Excuse me. Could someone please explain why Goldman is on the list? I do not think they are qualified as reviewers, based on their engineering standards.
25
haimez 1 day ago 3 replies      
26
_pmf_ 1 day ago 2 replies      
> Many disagree with circular module dependencies (I am one) at least at this initial stage.

Does disagreeing with gravity allow you to fly?

27
nailer 1 day ago 3 replies      
I have little idea about cars (Java), but in carpet land (JS): we had the largest module ecosystem ever and it was pretty much ignored by TC39 in favor of a new solution, which has technical benefits (it's async), but there's no migration path from the current standard to the new anointed 'standard'. Sometimes technical committees just refuse to pave cowpaths. Look at the number of lines it takes to do an XHR with the 'fetch' API vs superagent in, say, 2012.
28
merb 1 day ago 1 reply      
Most of the no votes are only because the voters are concerned about their influence within the JCP... This is a sad day for software development.

 ... is concerned about the lack of a healthy consensus among the members of the Expert Group
 From our point of view the lack of consensus inside the EG is a dangerous sign
 I understand IBM's and others reason for their "No" vote and heard many similar concerns by e.g. the OSGi community or contributors behind major build systems like Maven, Gradle or Ant. Most of their concerns are still unanswered by the EG or Spec Leads
 What we are especially concerned about however, is the lack of direct communication within the expert group
 We echo ... comments in that we absolutely recognize the tremendous achievements and the great work that has been carried out until now by the EG members as well as (and especially) by the Spec Lead himself.
Politics will either delay Java 9 or completely kill it. Some people just want to show their teeth. Sad.

20
U.S. life expectancy varies by more than 20 years from county to county washingtonpost.com
228 points by fmihaila  2 days ago   141 comments top 21
1
generj 2 days ago 6 replies      
One possible partial explanation for this is the same reason why the Bill Gates Foundation wasted a bunch of money fostering small high schools. Smaller high schools were some of the best performing schools...but also some of the worst [0].

The answer is just that small counties have high variance. By chance some small counties will be a lot higher or lower than the national average.

I would be interested in seeing whether a Cox Proportional Hazards Model would show that the remaining differences are related to pollution, meth, economics, etc.

[0]http://marginalrevolution.com/marginalrevolution/2010/09/the...
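
For readers who haven't met it, the Cox model mentioned above ties the death hazard at time t multiplicatively to covariates such as pollution or economics, leaving the baseline hazard unspecified:

  \lambda(t \mid X) = \lambda_0(t) \exp(\beta_1 X_1 + \cdots + \beta_p X_p)

A fitted exp(beta) > 1 for, say, a pollution covariate would then indicate elevated mortality risk associated with it.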

2
binarymax 2 days ago 9 replies      
Not only county-to-county. My friend did his PhD dissertation on how (at least in Rochester, NY) life expectancy varies by decades between zip codes. His research focused on urban food deserts and studied how the lack of nearby access to healthy food restricts diets to what is available in convenience stores (chips, soda, etc.). I wish I had access to his dissertation, but he just defended a month ago and I cannot find it in any publications right now.

EDIT: Not my friend's paper, but here is a similar study: https://hrs.isr.umich.edu/publications/biblio/8355

3
davidf18 2 days ago 2 replies      
A very similar article by many of the same authors was reported in JAMA in Dec 2016.

From the JAMA Arch Int Med article from today, p. E6: "...At the same time, 74% of the variation was explained by behavioral and metabolic risk factors alone, while only marginally more variation was explained by socioeconomic and race/ethnicity factors, behavioral and metabolic risk factors, and health care factors combined."

From the WaPo article:"Mokdad said countries such as Australia are far ahead of the United States in delivering preventive care and trying to curb such harmful behaviors as smoking. Smoking, physical inactivity, obesity, high blood pressure these are preventable risk factors, Mokdad said."

In NYC, and not just Manhattan, New Yorkers are doing better because of a number of interventions initiated in 2001, when Mayor Bloomberg and Dr. Tom Frieden took over as Mayor and Health Commissioner.

Adult smoking is 14% in NYC, 24% in Louisiana. Raising the cost of tobacco contributes more than half the effect of getting smokers to quit and to stop teens from ever starting.

NYS tobacco tax is $4.35 per pack and the city is an additional $1.50. Cigarette sell for at least $12 per pack here.

The tax is $1.08 in Louisiana.

Mokdad mentions Australia, where the tobacco tax is $14 per pack plus an additional $2 sales tax.

The ACA made a huge mistake in not raising the about $1 US Federal tax to a much higher number for the 13 billion packs smoked each year.

Raising the Federal tax by $4 would raise at least $30 billion each year for helping those with high risk preexisting conditions.

4
CoolGuySteve 2 days ago 1 reply      
This map seems correlated with socioeconomic status and all the health implications that go along with that. And it looks similar again to Republican voting districts. It's Sarah Palin's "Real America", so to speak.

One thing the Democrats would do well to focus on is the fact that there's a large portion of the country that is sick, where the statistics look more like an underdeveloped country. Those of us who live in the major cities would do well to empathize with this other part of the country and their malaise, even if for our own sake of having a more sane and less partisan government.

5
nolemurs 2 days ago 3 replies      
I couldn't help but notice that the top three counties (Summit, Pitkin, and Eagle counties in CO) are largely empty space with some of the country's best ski resorts. I suspect that the life expectancy there may largely be driven by fit retirees who move there.
6
JohnGB 2 days ago 2 replies      
I really wish that journalists would learn some 3rd grade maths before writing anything with numbers in it. How does the life expectancy vary by "more than 20 years" when the difference between the counties with the highest (85) and lowest (67) life expectancies is 18 years?
7
tn135 2 days ago 3 replies      
And why is that a surprise? Public schooling in the USA has widely distorted the housing market, creating ghettos where similar people get clubbed together.
8
legitster 2 days ago 1 reply      
Is there any reason this isn't just selection bias? Young, healthy people with good careers move to big cities. The rest stay behind. Do we have data comparing those who stuck around vs. those who grew up there but moved away?
9
brooklyntribe 2 days ago 1 reply      
Rural upstate NY. Always checking out the obits. There were some weeks there when no one broke 62. Seems to have stabilized, but more rural than Appalachia. The winters are brutal, MDs are hard to find.
10
dkarapetyan 2 days ago 0 replies      
Heck, I bet it varies within a 5-mile radius. Just go to East Palo Alto and then go to "not east" Palo Alto. The reasons, I think, are pretty obvious and almost certainly correlated with how affluent your neighbors are. Money basically solves all wealth-related problems, like access to good schools and healthcare. Even though in theory healthcare and education should not be so strongly coupled to money.
11
dainichi 2 days ago 1 reply      
The "life expectancy of a county" is not a trivial thing to define if you ask me. Is it how long children born now in a county can expect to live, no matter where they live or die, does it only depend on people who die in the county, or is it somehow weighted by the time people live in the county? Maybe there's a standard definition, but my guess is that most people don't know.
12
Zyst 2 days ago 2 replies      
There's a phrase that stood out to me:

>We are falling behind our competitors in health. That is going to impact our productivity; thats going to take away our competitive edge when it comes to the economy, Mokdad said. What were doing right now is not working. We have to regroup.

What's the logic behind this? Out of curiosity. It's a morbid thing, and I hesitate to say it, but from a purely utilitarian view isn't it better for a country, from a macro perspective, if people die as close as possible to when they finish their working life and retire?

I might be completely off base there, and this is mostly a request for more information, not saying people should die early. As far as I'm concerned I hope we all live to 200.

13
protomyth 2 days ago 0 replies      
Oglala Lakota County, which is completely contained inside the Pine Ridge Indian Reservation, is served by the Indian Health Service, with 94%+ of the population eligible for free, government-provided medical care. I notice that, other than pointing it out on the map, they didn't discuss it in the article.

http://www.richheape.com/american-indian-healthcare.htm

14
notadoc 2 days ago 0 replies      
Of course life expectancy is going to vary by county, because the variables that contribute to life expectancy (income, lifestyles, dietary habits, quality of health care, and access to health care) are vastly different.

I assume you'd need a broad cultural shift in attitudes about food consumption, obesity, physical activity, and vast health care networks that ignore income to change it.

15
MR4D 2 days ago 2 replies      
Some random information in this article....

For instance, they cite almost no lung cancer deaths in Summit County, Colorado, while the highest rate is found in Florida.

Well, duh - why the heck would a lung cancer patient want to try and breathe thin air at 9,000 ft??? Sea-level would be much more comfortable.

Not picking on the statistics, but they could have pointed out that some of the variation is purely logical.

16
pnathan 2 days ago 0 replies      
Not surprised. Visited the Midwest a few years ago, and obesity was ridiculously off the charts compared to the Intermountain West or WA/CA. There are significant social factors playing out on a wide and complex scale.

My advice, if you're stuck in an area with unhealthy habits, is to move to an area with healthy habits.

17
worldsayshi 2 days ago 0 replies      
> "We are falling behind our competitors in health. That is going to impact our productivity; thats going to take away our competitive edge when it comes to the economy"

Interesting how it seems like health itself isn't seen as the target metric. Nope, economy is what's important.

18
cylinder 2 days ago 0 replies      
Not everything is the government's fault. Go observe the lifestyle choices and dietary choices some of these people make, and then add in the drinking and smoking.
19
irrational 2 days ago 5 replies      
What about from country to country. I'm from the US and, at least where I live, it is rare to see someone smoking. Second hand smoke has basically become a thing of the past. But this past week we went up to Vancouver BC and we were shocked at how many people were smoking. It seemed like you couldn't go 10 feet without walking through another cloud of second-hand smoke. Apparently the anti-smoking campaigns of the US never made it up North ;-) Or maybe their socialized medicine makes it so that they don't have to worry about health consequences as much. I don't know. I just wonder if the life expectancy is lower in Canada (or at least Vancouver) since smoking seems to still be A-OK.
20
throwawaysf9 2 days ago 0 replies      
I create a throwaway account and say this almost every time a post hints at something like this: it is commonly taught at the school of public health at Cal that "your zip code is a stronger indication of your life expectancy (and quality) than the color of your skin". This has been known for years now.
21
RichardHeart 2 days ago 1 reply      
If there was no difference in behavior at all between all of the people of the USA, I think you'd still see pockets of more and less progress, just due to the natural distribution of dying. I'm curious what the expected variance by zip code would be if everyone's behavior was identical.

As the article states, some countries are making more progress than the USA on modifiable risks, such as smoking. Australia is one such country. If advertising health were as profitable as advertising vices, we'd be in better shape.

21
Thunderbirds Future Home mozilla.org
293 points by buovjaga  1 day ago   116 comments top 19
1
newscracker 1 day ago 5 replies      
Last year when donating specifically to Thunderbird was made possible on mozilla.org, I donated to the project because it has provided a lot of value over the years.

Recently I started looking at the discussions on the tb-planning mailing list, and it looks like we'll get a revamped (fully rewritten) Thunderbird. That sounds like a very long project to me - probably a few years just to bring it to what Thunderbird already provides today. Plus the extensions system needs to be revamped as well (similar to what's happening on the Firefox side with XUL ones going out). Getting Exchange calendaring done is also not a priority because of the complexity and the effort needed. So it looks like we will get a more maintainable product after some years. I'm not sure that's going to appeal to many people as something to donate for.

I'm happy with Thunderbird and some extensions that I use regularly, the only exception being that calendaring support for Exchange is very poor and unreliable (even with the Exchange EWS Provider extension or with external solutions like DavMail). Since I don't like taking risks with email client alpha or beta releases because of the fear of data loss (and with huge mailboxes, even detecting data loss would be a chore), I'll just stick with the current version and hope that the new revamped one arrives in a stable form sooner (of course, I will donate periodically). I'm excited and afraid!

2
elipsey 1 day ago 6 replies      
My CS instructor once asked everyone in the class to "raise your hand if you still use an email client." My hand was the only one that went up, and a couple of people laughed and said "really?"

I think that was almost 10 years ago.

3
Chirael 1 day ago 1 reply      
Day to day I use my email provider's regular web interface but I use Thunderbird every few months when I need to do a massive email cleanup - there is no other tool I'd rather use and it's indispensable to me for that purpose.

Also the article states, "In many ways, there is more need for independent and secure email than ever" and I agree 100%. Thank you to everyone who works on this project!

4
nickcw 1 day ago 1 reply      
I use Thunderbird as my main email client and I have a bit of a love-hate relationship with it! I have a complicated email setup with 1000s of folders and lots of mail accounts and filtering, and by and large it does a great job.

I still use mutt when I really want an email powertool, but I can't use it as my daily email client any more (and haven't for years) now that HTML emails are so prevalent.

I use lots of plugins with Thunderbird (Copy Sent to Currrent, Enigmail, External Editor, Nostalgy, QuickFolders, Identity Chooser, Mail Redirect, ...) to try to bring back some of the functionality I'm used to with mutt and it works quite well now.

In recent months I find Thunderbird needs restarting once a day which is frustrating. It goes into some kind of internal loop processing an email and never returns. Probably a consequence of too many plugins!

5
stinkytaco 1 day ago 0 replies      
I see this:

>But there are still pain points build/release, localization, and divergent plans with respect to add-ons, to name a few. These are pain points for both Thunderbird and Firefox, and we obviously want them resolved. However, the Council feels these pain points would not be addressed by moving to TDF or SFC.

and then this

> We have come to the conclusion that a move to a non-Mozilla organization will be a major distraction to addressing technical issues and building a strong Thunderbird team. Also, while we hope to be independent from Gecko in the long term, it is in Thunderbird's interest to remain as close to Mozilla as possible in the hope that it gives us better access to people who can help us plan for and sort through Gecko-driven incompatibilities.

So I'm not sure I fully understand their direction. Are they simply less focused on solving those issues right now?

I use Thunderbird but don't really have a horse in the race; I get what I need out of it and I support them. I'm just curious.

6
finnjohnsen2 1 day ago 3 replies      
I don't use Thunderbird, but I like that it exists and I hope they survive. Perhaps because it's my lifeboat if I ever need to abandon spying web-hosted clients (Gmail etc.).
7
mariusmg 1 day ago 0 replies      
So for the moment, at least, Thunderbird stays with Mozilla. That's good.
8
phantom_oracle 1 day ago 1 reply      
I hope Thunderbird lives on for a long time, but in the FOSS world, one should never depend on a single piece of software for their day-to-day functionality, thus here are some alternatives:

https://alternativeto.net/software/mozilla-thunderbird/?lice...

You must also keep in mind that desktop mail-clients make the (otherwise) complicated PGP-encryption of emails a bit more user-friendly.

9
tracker1 1 day ago 1 reply      
In my mind, there are a handful of things that are essential to getting Thunderbird back to a usable state... some of these could be plugins...

First, the exchange/calendar integration options clearly suck... establishing a clear calendar interface as a built-in with extensible points for plugins for authentication/sync of calendars would be a good start.

Second, likewise with calendar auth/sync would be an extensible interface for folder sync and authentication, so that a cleaner integration for common providers based on an underlying IMAP can be used... this way the conventional "junk, spam, inbox, sent" folders could be presented the correct way as well as the underlying storage for a given provider.

Also, along with calendar/email would be more extension points for scheduling, contacts, etc...

As it stands, even if there were different plugins for a Google calendar and an Exchange/O365 calendar, contacts, etc., if the underlying pieces can be shared it would be a better user experience.

Moreover, there needs to be some serious reconsideration of the UI/UX... I'm a big fan of material design, but some variation on that coming a lot closer to a Gmail app for desktop would be a really nice start... and getting calendar/task/contacts integration points and primitives for extension would go a long way here. Having the core UI go the same direction as Servo, with most of the UI/extensions being HTML/JS based, would be nice.

Likewise, NPM compatibility for extensions' modules would be nice as well.

10
clishem 1 day ago 2 replies      
> A Bright Future

> The long term plan is to migrate our code to web technologies

11
echodevnull 1 day ago 0 replies      
Here's a proposal on how the move to a new Thunderbird based on web technologies could be handled: https://groups.google.com/forum/#!topic/tb-planning/SPs8gzO5...
12
pkaler 1 day ago 0 replies      
I've gone from using Sparrow to Mailbox and now to Spark on macOS and iOS. Every time a good email client comes around, someone seems to kill it.
13
systematical 1 day ago 1 reply      
I use Thunderbird because my work uses Gmail. I guess (maybe) I could add the secondary work Gmail account to my regular Gmail. Not sure; the other alternatives are to keep a separate incognito window or keep work email open in a separate browser.

I just use Thunderbird instead.

14
edpichler 1 day ago 1 reply      
I like to read about these open source organizations that are apparently very well administered. It's a very interesting business model to me.
15
panamafrank 1 day ago 0 replies      
Have Mozilla considered Tracy Island?
16
JetSpiegel 1 day ago 0 replies      
Frankly, I think it would be easier to write a GUI over mutt than to rewrite Thunderbird.

I was a heavy user of Thunderbird, but after migrating to mutt, it's basically obsolete. The only pain point is really HTML, but converting it to text is Good Enough most of the time. Outlook-produced emails still look like crap, but I can click a button and open it on Firefox.

Mutt is not only alive and kicking; there's also the new NeoMutt project, which is the NeoVim of Mutt. We have initial Lua scripting capabilities now.

http://neomutt.org/

17
frik 1 day ago 6 replies      
A Bright Future

The Thunderbird Council is optimistic about the future. With the organizational question settled, we can focus on the technical challenges ahead. Thunderbird will remain a Gecko-based application at least in the midterm, but many of the technologies Thunderbird relies upon in that platform will one day no longer be supported. The long term plan is to migrate our code to web technologies

Mozilla dumps XUL tech from Gecko left and right, and removed proper "classic" mod support from Firefox... how is this a bright future? Thunderbird, as a big XUL app, is stuck with a soon-to-be-unsupported old Gecko. And how is the plan to slowly rewrite it viable? Replicating the dated UI with HTML5 will be an even bigger clusterfxxk.

We need a proper open source offline client, and it should have a modern UI with at least a conversation view like Gmail's. Wasn't there an HTML5-based email client in FirefoxOS? Start with that code and set up a new Mozilla-foundation-funded offline email client, and keep security support for Thunderbird until the new email app is ready.

18
orionblastar 23 hours ago 0 replies      
I hope The Document Foundation gets it to add to LibreOffice, as a FOSS alternative to Outlook.

I get thousands of emails a day. I have several email accounts, as I switch to a new one whenever the old one gets a lot of spam. I email my friends and family my new address, but they keep writing to my old ones. Message filters help me sort stuff into different folders to find important emails and attachments.

19
Vinnl 1 day ago 1 reply      
tl;dr Thunderbird will remain independent, but legally and fiscally be part of Mozilla (rather than e.g. The Document Foundation (of LibreOffice) or the SFC).

Also of note: apparently Thunderbird is receiving enough donations, and has been for a while, to give confidence that it will be able to manage just fine independently. Good news IMHO.

22
Gallery of programming UIs docs.google.com
307 points by Chris2048  2 days ago   76 comments top 37
1
osullivj 2 days ago 3 replies      
No Excel? It's the most widely used visual, grid-based functional programming environment. MS claims more than a billion Office users. Any time one of them edits a formula in Excel, they're programming. Would also be nice to see AVS/Express in there. Jeff Vroom built the AVS VPE more than 20 years ago.
2
lachenmayer 2 days ago 0 replies      
I threw my own hat into the ring with my undergraduate thesis, in which I wanted to create an editor for Elm which took advantage of all the niceties that strongly-typed functional reactive programming affords. I also took a closer look at some of the projects mentioned here (Light Table, Lamdu, Tangible Values, Scratch/Hopscotch, Inventing on Principle etc...). You can find its (now totally obsolete) remains here: https://github.com/lachenmayer/arrowsmith

The idea was basically: you should be able to use different editing UIs for values of different types. The lowest common denominator is code as text, but you should be able to switch on different editing UIs for numbers, colours, graphics, record types etc. The second half of this video shows off some demos for all of these: https://www.youtube.com/watch?v=csRaVagvx0M

I also had some (far too) ambitious plans for stuff like type-driven drag & drop editing (mainly for function application), which I unfortunately didn't get to implement, but you can read about them in the report: https://github.com/lachenmayer/arrowsmith/blob/master/report...

By far the most intellectually stimulating work I've ever done! We're definitely not there yet, to paraphrase Alan Kay :~)

3
jonathanedwards 2 days ago 2 replies      
Author here. This doc is just my personal curation of notable/interesting UIs for programming, as inspiration for my own research. Not attempting to be comprehensive, and being somewhat opinionated on the level of generality required to be called "programming". Thanks for all the links, I'll check them out.
4
gavinpc 2 days ago 2 replies      
Wow. This is the hardest problem in programming. This is a very thought-provoking tour of nearly forty years' worth of valiant efforts to tackle it.

Add Sketchpad, and you'll have half a century.

https://en.wikipedia.org/wiki/Sketchpad

5
i336_ 2 days ago 0 replies      
NOTE! NOTE! NOTE!

Some of the slides have annotations attached to them! These provide extra detail, or links to research papers.

Tap/click the gear icon at the bottom and select "Speaker notes", or press S.

Alternatively, if you clicked to open this in the editor view (https://docs.google.com/presentation/d/1MD-CgzODFWzdpnYXr8bE...), you already should be seeing these notes just underneath each slide.

6
DougWebb 2 days ago 1 reply      
I'm definitely in the "prefer to read" camp. A lot of these, especially the more graphical ones (that aren't just fancy syntax highlighting) seem way too disjointed for me. I don't want my code floating around in separate bubbles/windows, I want to see it in-context, and I want to be able to see a lot of it at once.

I do think that visual representations of code can be very helpful for analysis, especially if you can see the code running in the visual representation while you're debugging it. But for editing, I want the text view.

The other thing that looks really handy is the ability to use proper mathematical symbols, when that's appropriate for what you're doing. The point of those symbols, after all, is to be a concise textual representation of the concepts. The problem is our input tools; a keyboard and mouse are really not suitable for a written language developed for hand-held pens and paper. That might change soon; the latest touch screens might make it possible to have a notebook-sized pad with very-high-precision pen input as a common third input device, next to our keyboards and mice. If that becomes common, languages and IDEs that make use of it might become common as well.

7
erikj 2 days ago 3 replies      
I think Symbolics' Dynamic Windows and CLIM could be added to this gallery as well, as early examples of presentation-based user interfaces:

https://dspace.mit.edu/bitstream/handle/1721.1/41161/AI_WP_2...

ftp://publications.ai.mit.edu/ai-publications/2004/AIM-2004-005.pdf

8
sjclemmy 2 days ago 1 reply      
One of my formative experiences with programming, er, programs was 'The Quill' (https://en.wikipedia.org/wiki/The_Quill) for the ZX Spectrum. Perhaps not a true programming environment, but I have fond memories of writing an adventure called 'The Jewels of Zenagon.' Fun times :)
9
jonathanedwards 2 days ago 1 reply      
Author here. If you can't access the Google Doc, I put up a slideshow here: https://docs.google.com/presentation/d/1MD-CgzODFWzdpnYXr8bE... It's missing the links in the speaker notes though, so I'll come up with a better solution.
11
jaimex2 2 days ago 2 replies      
I'd love to see Macromedia Flash added. Yes, Flash outstayed its welcome on the web but man was it fun and direct to program in.
12
nzonbi 2 days ago 1 reply      
Another programming UI: I have been designing a graphics-based programming language, xol. In xol, code is represented with graphical elements instead of text. A prototype of the first version (a partially working code editor) can be seen here: https://github.com/lignixz/xra9

There is already a second-version design, 2x better looking, not yet revealed. I also have some additional ideas that may improve the language further.

13
thanatropism 2 days ago 0 replies      
Also missing: the system dynamics paradigm, where boxes are accumulators and arrows are flows. Many packages like this have existed and still exist.

This is (a recent iteration of) Stella: http://sdwise.com/wp-content/uploads/2013/11/System-Dynamics...

Here's VENSIM: http://www.makrosimulation.de/Screenshot-Staatpublish.jpg

Matlab's SIMULINK is kind of a generalization of this paradigm (less convenient because of too many choices to make): http://ctms.engin.umich.edu/CTMS/Content/Suspension/Simulink...

14
vanderZwan 2 days ago 0 replies      
The second-to-last slide mentions schematic tables, an idea proposed for Subtext 2 by Jonathan Edwards (who seems to be the author of these slides). Here is a video of him explaining the concept:

https://vimeo.com/140738254

Of all the UIs, I thought this was one of the few that made the screen look less cluttered and easier to follow. I would have loved to see it elsewhere, even if only as a debugging tool.

15
tluyben2 2 days ago 1 reply      
Thanks Jonathan, that is just excellent again. These days I've dedicated myself to getting out of my chair (I've programmed standing up for around 7 years now, but that's not enough as I get older); my goal is being able to walk while programming. And so many of the interfaces you have here can never even aspire to that. After 10 years of carrying this same goal around and about 40 failed prototypes, I'm finally at a point with something where I can say that, while walking, it is not slower than sitting down (I am well aware this is also related to getting older; I am capable of doing far more in my head than I was 20 years ago). And it is interesting to see how all these interfaces depend on your being stationary - some not only stationary but stationary in front of a quite massive screen (visual programming often needs rather large surfaces for anything trivial).
16
arca_vorago 2 days ago 0 replies      
Nice to see Epic's Unreal Engine 4 blueprints in there. Right now I am heavily relying on blueprints instead of C++ as I wait for some features to mature, and the interface is really growing on me. I just find it easier and faster to prototype in.

https://docs.google.com/presentation/d/1MD-CgzODFWzdpnYXr8bE...

17
zeveb 2 days ago 0 replies      
Very cool review. The one item I was hoping to see, but missed, was UserLand Frontier: http://www.spicynoodles.net/projects/postgresql/images/pgdem...

It was a really neat outliner which was extended to provide a pretty neat scripting environment, and eventually a web server. I used it to run a web site from my dorm room in the late 90s.

It's interesting to see org-mode (another outliner) now be another example of an awesomely extensible outline-based software system.

18
dickbasedregex 2 days ago 0 replies      
I was building a project a few years ago where I really needed some resources on visual-programming UIs and couldn't find much information on the topic. This is nice. I saved this.
20
specialist 2 days ago 0 replies      
Ridiculously comprehensive. A great start. Thank you.

Maybe wiki-fy this? So that people can add source links?

In the mid-90s, I spent some time trying to make a "structured editor" for VRML-97, where the scene graph and the textual representation were linked (two-way editing). I didn't get very far. I'm glad others continue to work on these ideas.

21
l1n 2 days ago 1 reply      
I'd suggest updating the link to point directly to the presentation: https://docs.google.com/presentation/d/1MD-CgzODFWzdpnYXr8bE...
22
interfixus 2 days ago 0 replies      
Early Delphi versions were influential and in widespread use. They might deserve an honorable mention.
23
jstewartmobile 2 days ago 1 reply      
This was kind of depressing. It seems like almost everything aside from IPython and Mathematica in this list is still worshiping at the altar of PARC. That, and two of the most common UIs for GSD (Vim and Emacs) aren't even on the list.

On a personal note, I'm particularly hostile to the Squeak/LabView style of flowchart programming. Once you get something even slightly non-trivial going, you can just feel your life force being drained by all of the scrolling and zooming and dragging and futzing.

24
njstraub 2 days ago 0 replies      
might want to add in NodeRED too! https://nodered.org/
25
AriaMinaei 2 days ago 0 replies      
Computer programs are vastly under-utilised when it comes to programming computers.
26
vmeson 1 day ago 0 replies      
Can anyone provide a link to the "versions and diffs" tool for reviewing git history? Is this a separate program or just highlighting various UIs that several git tools use?
27
tibu 1 day ago 0 replies      
How about Notepad / Notepad++ / VS Code / Atom / etc.?
28
maaaats 2 days ago 0 replies      
I would have liked a summary of the idea/usage/strengths of the various screenshots. Just by looking at them I can understand something, but I think there may be many interesting concepts here.
29
bbrik 2 days ago 0 replies      
This is a great collection! There are a lot of replies adding other examples. I am gonna add ControllerMate, which I use daily.
30
_FKS_ 2 days ago 0 replies      
WolframAlpha?
31
zem 2 days ago 0 replies      
Greenfoot (slide 29) seems to have a lot of nice UI ideas. I could definitely see using it as an IDE to explore a new codebase.
32
solomatov 2 days ago 1 reply      
I don't understand why the IntelliJ family of products isn't included here. Also, you include mbeddr, but don't include JetBrains MPS, which it's based on.
33
galfarragem 2 days ago 0 replies      
And Trello? (kanban)
34
ekvintroj 2 days ago 0 replies      
Oh, Smalltalk <3
35
vincnetas 2 days ago 1 reply      
36
njstraub 2 days ago 0 replies      
Might want to add in https://nodered.org/ too
37
ensiferum 2 days ago 2 replies      
Hmm, this seems like a somewhat incoherent collection of screenshots of IDEs, editors, desktop environments and whatnot.

Not sure what the idea is here.

23
Repl.it React Native mobile apps in browser repl.it
312 points by nabraham  2 days ago   61 comments top 18
1
amasad 2 days ago 4 replies      
Hey, CEO/cofounder of Repl.it here. Was pleasantly surprised to see this on HN! React Native and Expo have taken the world of mobile development by storm and we're happy to play a part in spreading this amazing technology.

Many of you might know us from being one of the first in-browser REPLs (for 30+ programming languages: https://repl.it/languages). Our mission is to make programming more accessible, and that's why, more recently, we've also been working on tools for educators wanting to teach programming. Our Classroom product (https://repl.it/classrooms) makes it easy for anyone to teach programming online and in physical classrooms.

Happy to answer any questions.

2
tyingq 2 days ago 1 reply      
The title is confusing me a bit. This is writing/deploying react native apps in a browser, right? As opposed to running them in a browser.
3
ccheever 2 days ago 1 reply      
Hi, cofounder of Expo here. It was really fun to work with the Repl.it team on this. This came out really well. It's such a good way for novices to learn to make mobile apps.
4
untog 2 days ago 3 replies      
But can I render my React Native for Web app in an in-app webview, on Repl.it?
5
treytrey 1 day ago 0 replies      
FYI - I just released this:

Dynamic, Responsive Layout for Universal and Orientation-Aware React Native Apps (works in Expo, Xcode et al): a Flexbox-based layout library that makes building Universal layouts in React Native more fun and much easier than using Flexbox and JS directly.

Repo: https://github.com/idibidiart/react-native-responsive-grid

aspectRatio demo: https://m.youtube.com/watch?v=Nghqc5QFln8

breakPoints demo: https://www.youtube.com/watch?v=GZ1uxWEVAuQ

6
jonesnc 2 days ago 0 replies      
There And Back Again: A React Developer's Tale
7
pkamb 2 days ago 1 reply      
> an SDK like XCode

"Xcode"

8
sohkamyung 1 day ago 3 replies      
(Please downvote if my comment is off-topic)

Is it possible to support creating and deploying Minecraft server mods using repl.it?

At one point (2-3 years ago) my son was interested in doing this but the pain of installing Java, an IDE (NetBeans), getting started in Java programming and deploying it on a vanilla Minecraft server was just too much.

Or are there better options now for Minecraft server mod programming?

9
ztratar 2 days ago 0 replies      
Love Amjad and Repl.it!!!! :D
10
appleflaxen 2 days ago 1 reply      
I would prefer not to create an account to learn something... any chance this will change?

nevermind; found the anonymous option. Thanks!

11
findjashua 2 days ago 2 replies      
How does repl.it compare with Expo's Snack (snack.expo.io)?
12
rw2 2 days ago 3 replies      
It's a good tool, but how does it compare with: https://github.com/decosoftware/deco-ide

It seems that it also renders React Native and has better features around styling.

13
q1t 1 day ago 1 reply      
Is it possible to structure a project inside repl.it? I mean, I have a small React project and wanted to replicate it in your IDE/repl.
14
MarcusDavenport 1 day ago 0 replies      
Repl.it should be in every school in America! We need more innovation like this! #Amjad4president
15
doozler 1 day ago 1 reply      
I really want to learn React Native and this looks like a fantastic tool. Does anyone have any documentation that easily explains design patterns in RN? Does all code live in one file?
16
thebigredgeek 1 day ago 0 replies      
This is awesome!
17
MarcusDavenport 1 day ago 0 replies      
It's always great when people leave Facebook (react native team) and create amazing products to improve the world!
18
it_learnses 2 days ago 1 reply      
how do you make money?
24
Warren Buffett says health care costs are a bigger problem than corporate taxes nytimes.com
246 points by wojt_eu  1 day ago   334 comments top 25
1
dalbasal 1 day ago 17 replies      
I'm not American, but I've been hearing about your health system for several years. Ironically, I know more about it than my own country's (Ireland).

Several years ago, there seemed to be a lot of talk about how much the US spends (private & public) per capita on health. It's a lot more than everywhere else. This was usually presented in the context of the health care regime: a UK-esque system, a Swiss-like system, etc.

Lately, that comparison seems to come up less. Obama-care, Trump-care or Bernie-care would mostly deal with who pays & how, not how much.

The "who pays" question is a favourite ideological one, so politicians and commentators are comfortable with it. But I think the "how much" question is probably the more important one, and the harder one to solve. If the US could get costs down to average European rates, then I'm sure a workable system could be found within the confines of most ideological frameworks.

The problem is that getting costs down is almost impossible. Costs are salaries of doctors & nurses, a giant pharmaceutical industry, thousands of radiologists, ultrasound technicians, the machines they use (far more frequently than Europeans)...

Getting costs down to EU levels would mean the medical industry shrinks like manufacturing shrank two generations ago.

I don't have a solution to suggest, but I do suggest toning down the ideological discussion. The problem is more of a technical one.

2
bedhead 1 day ago 9 replies      
There are many, many problems with healthcare in the US. Off the top of my head, the big ones are:

* Endless number of middlemen and administrators.
* Every player in the healthcare chain benefits from higher prices.
* No price transparency.
* Tacit collusion is rampant.
* "Cost no object" mentality to treating the dying.

The last one, while insensitive, is true nonetheless, and it's alarming that over 50% of all healthcare spending takes place in the last two years of a person's life. We have basically decided that it's okay to spend literally any sum of money on a dying person in order to prolong life by an average of a few months. And the problematic word there is average, because some people do live a lot longer, and that's what we all look to. I realize this is grim and seemingly lacks humanity, but unfortunately that doesn't make it not true. Charlie Munger, who is on the board of Kaiser Permanente, said this same thing yesterday..."over-treatment of the dying" was the biggest problem they faced.

It's reminiscent of our approach to college education - justified at any cost. So we push millions of kids into a schooling system that's not right for them, and the result is a lot of crappy education, worthless degrees, student loans, etc. Once we flip the switch to "there is no price you can put on _____" things get sideways FAST.

3
pavlov 1 day ago 11 replies      
The middleman role of insurance companies in American healthcare seems completely useless. They're not serving patients, doctors nor the national economy by siphoning off enormous profits from the 17% of GDP that gets spent on healthcare.

Getting rid of them would be extremely hard, of course, given how well entrenched they are thanks to lobbying and regulatory capture.

Insurance companies are a massive-scale version of the car dealerships that have managed to keep Tesla out of many US states by taking advantage of local legislation -- nobody would want to deal with a car salesman or an insurance company given the choice.

4
heisenbit 1 day ago 1 reply      
In the US, health care costs since '95 went from 13.1% to 17.1% of GDP, while in Germany they went from 9.4% to 11.3%. It is actually way worse than the article tells, if one considers the age structure of the two countries: Germany has 21.7% of its population over 65 vs. the US with 15.25%.

The non-tangible costs are also non-negligible. There is friction in the job market, as changing jobs risks incurring a potentially catastrophic coverage gap. There are bizarre industries focused on renegotiating issued medical bills, collecting on those, or managing the health-related bankruptcies.

Pricing of pharmaceuticals generally defies the laws of gravity, as the incentives of regulators, suppliers, distributors, doctors and insurers have been distorted beyond anything resembling a fair playing field. In such an environment, playing games is superior to providing value and adhering to generally accepted rules. When it comes to pricing, the costs of providing the service are often the least important input.

Steve Ballmer recently: "If you look at these tax deductions for employer-provided health [...], they're really subsidies to the affluent, which I guess I hadn't thought about them."

The biggest problem society faces at the moment is the vanishing middle class, and lower-qualified jobs that still provide enough to subsist on. For the latter, the costs of food, shelter, fuel and health are key. Lower the cost of living and there will be more jobs that are worth taking.

5
wojt_eu 1 day ago 2 replies      
“If you go back to 1960 or thereabouts, corporate taxes were about 4 percent of G.D.P.,” Mr. Buffett said. “I mean, they bounced around some. And now, they're about 2 percent of G.D.P.”

By contrast, he said, while tax rates have fallen as a share of gross domestic product, health care costs have ballooned. About 50 years ago, he said, health care was 5 percent of G.D.P., and now it's about 17 percent.

6
shaqbert 1 day ago 2 replies      
Good news is that the amount of health care spending is a choice. E.g. other western countries run health care costs of 10%-13% of GDP, sometimes offering vastly superior and more equitable outcomes than the US.

The bad news is that this is a choice Congress and Senate are taking on behalf of the American people. And with the partisan divide and lack of agreement on fundamental values, things won't really change.

Add in a rapidly aging population, and being cognizant of the per capita health care spending steeply increasing at later stages of life, kicking the can down the road won't make the later adjustment any easier...

7
british_india 1 day ago 2 replies      
The real problem with health care is that it's a gravy train for all involved. Doctors, who don't invent anything new and who just practice garden-variety medicine, are wildly overpaid. They don't like it to be known, but the average doctor earns a quarter-million a year. Totally unjustified.

Then there are the medical device manufacturers, big Pharma and the hospitals. They all are getting rich off the current system. That's what needs to change.

8
ransom1538 1 day ago 0 replies      
Americans unable to afford healthcare are just waiting in line to become bankrupt.

A study done at Harvard University indicates that this [medical costs] is the biggest cause of bankruptcy, representing 62% of all personal bankruptcies.

http://www.investopedia.com/slide-show/top-5-reasons-why-peo...

9
brohoolio 1 day ago 3 replies      
Healthcare costs create huge problems across the economy, increasing the cost of everything from manufacturing to higher education.

Between myself and my employer, it costs about $20,000 a year to insure my family. My employer shares much of the cost breakdown, and it's interesting how much goes to prescriptions and how much of that goes to specialty drugs to keep a handful of people alive.

$500 million spend on healthcare.

$120 million goes to prescriptions.

$40 million of that went to specialty drugs, representing 1.7% of the prescriptions.

"The average ingredient cost of a single-source brand prescription increased by 14.9% in 2016 to an average $745 per prescription, mainly driven by high-cost specialty drugs. The average ingredient cost of multiple-source brand prescription increased by 49.5% to an average $585 per prescription. The average ingredient cost for a generic prescription decreased by 10.9% to an average cost of $34.04 per prescription. "

10
kauffj 1 day ago 1 reply      
The most insightful and thought provoking analysis of health care I've seen in the last several years is this one:

https://randomcriticalanalysis.wordpress.com/2016/09/25/high...

It argues that the high cost of health care in the United States is explained by its extreme wealth and that health care is a superior good (https://en.wikipedia.org/wiki/Superior_good).

This is a much different explanation than what is given by either political party.

Anyone have any points in support of or against this argument?

11
ralfd 1 day ago 1 reply      
Obligatory Slate Star Codex Link:

http://slatestarcodex.com/2017/02/09/considerations-on-cost-...

Scott Alexander examines "cost disease" in the sectors of health care and education.

12
CWuestefeld 1 day ago 0 replies      
By this argument, we should also be examining why we spend so much on education. In 2010, the United States spent 7.3 percent of its gross domestic product on education, compared with the 6.3 percent average of other OECD countries.

Surely spending dramatically higher amounts than other countries, with no better effects, is enough to drive us to consider how we can reduce the costs of education - and should make us think long and hard before considering proposals that we should throw even more money at this.

It's surely true that having a well-educated workforce improves productivity, but it's also true that having a healthy workforce does the same. I'm having trouble finding much difference between the two examples.

13
bischofs 1 day ago 0 replies      
I really think health care costs in the US are a byproduct of Americans' obsession with convenience. Most lifestyles can be lived without any physical activity - the whole country is designed around the car.

Traveling around a bit, I've seen other cultures still require people to walk somewhat to get places, and people will also just go on "walks," whereas Americans will go for a "drive."

Food culture is also responsible; just jamming food into your face as quickly as possible rather than enjoying a meal is for sure an American thing.

Add all this up and you get 60% obesity rates in adults, and it's getting worse.

14
psyc 1 day ago 0 replies      
There are no sane constraints on the prices. Instead of "price is what the market will bear," it's "the market will bear whatever price." This creates an irrational drain on the rest of the system. Whoever is sucking on that drain is doing well, though.

My grandfather used to get a shot that was $12,000 a pop, and didn't do anything.

15
pow_pp_-1_v 1 day ago 0 replies      
I had kind of hoped that Trump would stumble into single-payer health care as a solution for the healthcare needs of his base and would somehow get it passed through Congress with support from Democrats. Alas, nothing of that sort seems likely.
16
ivanhoe 1 day ago 0 replies      
Looking from the outside, it seems to me that the root of the problem is not the structure of health care in the US, but the prices of health-related services and products. Prices are so inflated that hospital and medication bills are huge compared to what the same things cost in Europe or elsewhere.
17
ChicagoBoy11 1 day ago 0 replies      
Yes, this is an academic article, but I can think of no better resource for a pretty sane discussion of all the insanities in our healthcare system: http://faculty.chicagobooth.edu/john.cochrane/research/paper...

The thing that is pretty remarkable is that you unfortunately realize just how far either political party is from addressing any of the issues raised in the piece.

18
chaibiker 1 day ago 0 replies      
And entirely ignored seems to be the demand side of the question - why is that? Could expenses be higher than we want because we are less healthy than we should be? I see a lot of unhealthy habits associated with subsequent costly interventions. If we see no path forward to affect demand and leave it out of the discussion, the debate will be framed only as which system can provide that volume of healthcare for a little more or a little less.
19
alistairSH 1 day ago 0 replies      
Question for y'all... Have your health care providers started being more conscious or outspoken about cost? Mine definitely have, both for prescriptions and procedures.
20
faragon 1 day ago 0 replies      
Amazon could disrupt health care by providing services in the US. The current price overhead versus other developed countries is huge, so an efficient provider could shock the whole HC market.
21
jcfrei 1 day ago 0 replies      
Would it be a worthwhile idea to open up health care internationally? Maybe insurance companies could create global standards for medical procedures, so that clients could choose in which country they want a procedure performed and then receive or pay the difference relative to the cost of a domestic procedure. This could introduce some level of competition without jeopardizing quality - or am I missing something?
22
hudu 1 day ago 0 replies      
The pharma industry should open up; it is dominated by corporatism, with monopoly patents driving up prices, which makes individual spending on drugs rise to a level unaffordable for those on lower incomes.
23
moomin 1 day ago 0 replies      
Frankly, I just stubbed my big toe and that's a bigger problem than US corporation tax.
24
RichardHeart 1 day ago 0 replies      
Health care costs would be lower if more people were able to provide health care services. If the world focused less on rate-my-sandwich apps and more on fixing humans, the prices would be much more affordable.
25
ewood 1 day ago 0 replies      
Why the U.S. pays more for health care than the rest of the world

https://www.youtube.com/watch?v=gXBPKE28UF0

25
U.S. Census director resigns amid turmoil over funding of 2020 count washingtonpost.com
198 points by petethomas  1 day ago   133 comments top 18
1
DannyBee 1 day ago 5 replies      
It's funny how easy people think this is.

Yes it's "just a data collection app".

You're talking about an org that will send someone by boat to find you if your address is in the middle of a swamp.

They take collection and processing very seriously.

If you gave a startup 200 million or whatever, you'd have a pretty accurate census of internet connected people in the top 20 cities in the US. Oh, and a declaration of victory from the startup, plus 300 billion in market cap in the hope that they may actually be able to count everyone someday!

Somehow, I doubt you'd come up with something that can collect data well, know where to focus field representatives, etc., with 600k+ field representatives, in a reasonable and efficient manner.

Has anyone here tried to organize people, targets, and data in, say, a company twice the size of all of IBM? How did that go? :)

(The census is honestly relatively cheap. It costs about 50 bucks per person, total. Obviously, counting rural etc. areas is the majority of the cost.)

The 2020 Census is going to be weird, too. Disadvantaged minorities, etc. are statistically less likely to be counted [1], and especially given the climate in the US, counting immigrants is going to be especially hard.

[1] Which is why in the US the Democrats often try to pass rules around using statistical methods, and the Republicans claim it requires actual enumeration.

2
John23832 1 day ago 4 replies      
Troubled funding for the census leads to easier gerrymandering.

Today has been full of surprises.

3
froindt 1 day ago 3 replies      
I'm curious what caused the cost of the new electronic system to increase so much. I understand there could be a lot to it on the back end to ensure privacy and anonymity of the data collected, but it doesn't seem like it should be a huge deal technically. We're talking about ~325 million people and collecting demographic and address info [1]. The IRS has far, far more variables to collect info on, though it covers fewer people. 325 million people is nothing compared to scaled companies like Facebook, Amazon, or Google (far more data points per person, far more people).

Any speculation on cause or other considerations I'm missing? Did a quick search on Google News and didn't find anything. All the companies I listed have huge teams, but I am still not seeing how the cost has exploded.

[1] https://www.census.gov/history/pdf/2010questionnaire.pdf

4
iloveluce 1 day ago 2 replies      
A bit unrelated, but the US Census Bureau should really work with the US Postal Service when performing the census. It would save the Census Bureau some money and would provide the Postal Service with an additional source of funding.
5
jssmith 1 day ago 1 reply      
My guess is that the contract under scrutiny is RFP COMM-16-BC-2020. Here are the services to be provided:

 - Research and Data Analytics
 - Strategic Planning, Program Development, and Integration
 - Communications Support for Decennial Census Operations and Other Programs
 - Field Recruitment Advertising and Communications
 - Traditional Advertising and Media Buying
 - Digital Advertising and Other Communications Technologies
 - Social Media
 - Public Relations
 - Communications Planning and Materials for the Partnership Program
 - Statistics in Schools Program
 - Website Development and Digital Engagement
 - Rapid Response Activities
 - Project and Financial Management
 - Stakeholders Relations
 - Communications Support for the 2020 Census Data Dissemination
The scope here is pretty large and it includes advertising, etc., so one can see how it might cost hundreds of millions. It isn't just about setting up a database and running a web site.

Full document is here:

ftp://ftp.census.gov/about/business-opportunities/2020-comm-final-rfp-1-21-16.pdf

6
coldcode 1 day ago 1 reply      
Maybe they could just eliminate the census entirely and have Congress pick whatever numbers they want. That would save a lot of money. \s
7
vtange 1 day ago 1 reply      
This is something to keep an eye on, given the fact that Congressional district borders will be drawn using 2020 data.
8
eric_b 1 day ago 5 replies      
650 million dollars to tally some simple demographic information? I understand there is always more to the story - but this doesn't pass the smell test. Additionally, what does the Census Bureau need 1.5 billion for in a non-census year?

I don't mean that rhetorically - 1.5 billion a year pays 15,000 people an annual salary of 100k. Where is that money going?

9
kbd 1 day ago 2 replies      
> And it comes less than a week after a prickly hearing at which Thompson told lawmakers that cost estimates for a new electronic data collection system had ballooned by nearly 50 percent.

I'd love to know the details of that project overrun.

10
throwaway5752 1 day ago 6 replies      
It's ridiculous that this is the top trending submission when the president fired the FBI director who was investigating his ties to Russia. If political stories are fair game, this is important but a rounding error next to Comey.

@dang, the HN ranking system is so trivially hackable via downvotes and flags by a motivated minority.

11
3131s 1 day ago 0 replies      
They should do the census and register Americans to vote in the same process.
12
Rapzid 1 day ago 2 replies      
1bn is a bit much for a CRUD app... I'm exaggerating of course, but I believe not by as much as they are on the costs...
13
xname2 1 day ago 0 replies      
Should the new electronic system really cost that much? Do they work with Silicon Valley, or will they just give the project to a terrible contractor, like the health insurance exchange marketplace?
14
swanson 1 day ago 4 replies      
Odds that Facebook ends up doing a privatized 2020 census?
15
jtedward 1 day ago 2 replies      
The above comment is an excellent example of political gaslighting; there is ample, undeniable evidence of some sort of GOP collusion with Russia. No sane person looking at the evidence could come to any other conclusion, but by confidently stating the exact opposite of the truth the above commenter seeks to sow doubt in the mind of a potentially disinterested or confused audience. This is an increasingly common tactic on these boards.
16
perseusprime11 1 day ago 0 replies      
Can we use machine learning to predict the 2020 count? Do we really need to count manually?
17
pfarnsworth 1 day ago 3 replies      
Honestly, why is it so expensive? If you created a startup with $50 Million, and gave them 4 years to implement this system, I'm sure it would be done much more efficiently. Then you "buy" the startup for $200 Million at the end of it, and all the employees get a nice payout.
18
pdog 1 day ago 2 replies      
Why hire nearly a million temporary census takers[1] every ten years? Even private companies have databases with far more detailed and more frequent demographic and psychographic information on every person in the United States.

[1]: https://www.census.gov/history/www/faqs/agency_history_faqs/...

26
Git 2.13 github.com
237 points by edmorley  1 day ago   68 comments top 9
1
freditup 1 day ago 4 replies      
Hidden in the notes at the bottom is a pretty useful improvement to 'git stash':

> 'git stash save' now accepts pathspecs. You can use this to create a stash of part of your working tree, which is handy when picking apart changes to turn into clean commits.

I believe there may be a slight error in the GitHub blog post I quoted above: from what I can tell, it's actually the 'git stash push' command that now accepts pathspecs. But either way, still a neat new feature!
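
For example (a quick sketch; the message and path are made up):

    # stash only the parser changes, leaving the rest of the tree dirty
    git stash push -m "wip: parser fix" -- src/parser.c

    # bring them back later
    git stash pop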

2
ryanar 1 day ago 2 replies      
Oh, I am really digging the change to allow directory-level configs for config settings.

I have committed with my work credentials to open source projects more times than I can count.

3
styfle 1 day ago 2 replies      
> git branch, git tag, and git for-each-ref all learned the --no-contains option to match their existing --contains option. This can let you ask which tags or branches don't have a particular bug (or bugfix).

I'm surprised that didn't exist already. Several years ago, I worked on a tool to scan SVN merge history and save it in a graph database so one could ask this type of question: "Does this branch contain the fix?" Or the opposite: "Which branches do not contain this fix?"

It was a mess because there were 8 million commits in the repo and clients ranged from SVN 1.4 to SVN 1.8 (the server was upgraded too).

It would have made more sense to use git for something like that but it's hard to get thousands of devs to switch.
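
With the new flags, that kind of question becomes a one-liner in git (a sketch; abc1234 stands in for the bugfix commit):

    # branches that do NOT yet contain the fix
    git branch --all --no-contains abc1234

    # tags that already ship it
    git tag --contains abc1234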

4
clumsysmurf 1 day ago 2 replies      
Anyone know what has happened to the OSX builds? The de-facto project 'git-osx-installer' doesn't have binaries after 2.10

https://sourceforge.net/projects/git-osx-installer/files/?so...

git-scm.com says:

"You are downloading version 2.10.1 of Git for the Mac platform. This is the most recent maintained build for this platform. It was released 7 months ago, on 2016-10-14."

https://git-scm.com/download/mac

5
0x0 1 day ago 0 replies      
The "includeIf" thing based on filesystem paths for setting the user.email gitconfig seems useful!
6
ReligiousFlames 22 hours ago 2 replies      
On git moving away from SHA1: it's about time.

- There shouldn't be too many nor too few hash algos. Too many: paradox of choice, user confusion and interop overhead. Too few: security monoculture risks being broken by well-funded state actors

- Sane, future-ready default: SHA3-512

Also, git GPG signing should change to signing content, in addition to, or instead of, hashes.
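
(For what it's worth, SHA3-512 is already in Python's hashlib, so it's easy to play with; git hashes a blob as SHA-1 over a "blob <size>\0" header plus content, and the same payload under SHA3-512 looks like this:)

    import hashlib

    # the payload git would hash for a 4-byte blob containing "test"
    data = b"blob 4\x00test"
    print(hashlib.sha3_512(data).hexdigest())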

7
nailer 14 hours ago 2 replies      
Anyone know where to get clean git builds for Windows without any extra crap like the "git for Windows" builds have?

- Windows comes with bash

- Microsoft have an excellent openssh implementation on their github

- I don't want some dude's cool $PS1 and shortcuts and icons.

I just want git.

9
brutopia 22 hours ago 3 replies      
Not related to this, but please, GitHub, choose a better font to render code in Android browsers - the current one's unreadable!
27
A novel approach to neural machine translation facebook.com
274 points by snippyhollow  1 day ago   43 comments top 13
1
CGamesPlay 1 day ago 1 reply      
I'm relatively novice to machine learning but here's my best attempt to summarize what's going on in layman's terms. Please correct me if I'm wrong.

- Encode the words in the source (aka embedding, section 3.1)

- Feed every run of k words into a convolutional layer producing an output, repeat this process 6 layers deep (section 3.2).

- Decide on which input word is most important for the "current" output word (aka attention, section 3.3).

- The most important word is decoded into the target language (section 3.1 again).

You repeat this process with every word as the "current" word. The critical insight of using this mechanism over an RNN is that you can do this repetition in parallel because each "current" word does not depend on any of the previous ones.
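
In numpy terms, the attention step is roughly this (a sketch of plain dot-product attention; the paper's exact variant differs in details):

    import numpy as np

    def attention(target_state, source_states):
        # score each source position against the current target state
        scores = source_states @ target_state       # (src_len,)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                    # softmax over positions
        # blend the source states rather than picking one word outright
        return weights @ source_states              # (hidden,)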

Am I on the right track?

2
forgotmyhnacc 1 day ago 3 replies      
I really like that Facebook open sources both code and models along with the paper. Most companies don't, e.g. Google, DeepMind, Baidu.
3
gavinpc 1 day ago 1 reply      
> Facebook's mission of making the world more open

That's a rather strong statement, for a company that has become one of the world's most complained-about black boxes.

But yes, they have done a lot of good in the computer science space.

5
pwaivers 1 day ago 2 replies      
As far as I understood it, Facebook put lots of research into optimizing a certain type of neural network (the CNN), while everyone else is using another type called the RNN. Up until now, CNNs were faster but less accurate. However, FB has progressed CNNs to the point where they can compete in accuracy, particularly in machine translation. And most importantly, they are releasing the source code and papers. Does that sound right?

Can anyone else give us an ELI5?

6
deepnotderp 1 day ago 1 reply      
As far as I understand, only the use of the attention mechanism with ConvNets is novel, right? Convolutional encoders have been done before.
7
mrdrozdov 1 day ago 0 replies      
In this work, Convolutional Neural Nets (spatial models that have a weakly ordered context, as opposed to Recurrent Neural Nets, which are sequential models with a strongly ordered context) are demonstrated to achieve state-of-the-art results in machine translation.

It seems the combination of gated linear units / residual connections / attention was the key to bringing this architecture to State of the Art.
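
Of those pieces, the gated linear unit is the easiest to sketch in numpy (half the features gate the other half, as in Dauphin et al.; this is an illustration, not the paper's code):

    import numpy as np

    def glu(x):
        # split features in half: a carries content, b gates it
        a, b = np.split(x, 2, axis=-1)
        return a * (1.0 / (1.0 + np.exp(-b)))   # a * sigmoid(b)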

It's worth noting that previously the QRNN and ByteNet architectures have used Convolutional Neural Nets for machine translation also. IIRC, those models performed well on small tasks but were not able to best SotA performance on larger benchmark tasks.

I believe it is almost always more desirable to encode a sequence using a CNN if possible, as many operations are embarrassingly parallel!

The bleu scores in this work were the following:

Task (previous baseline): new baseline

WMT16 English-Romanian (28.1): 29.88
WMT14 English-German (24.61): 25.16
WMT14 English-French (39.92): 40.46

8
londons_explore 21 hours ago 1 reply      
This smells of "we built custom silicon to do fast image processing using CNNs and fully connected networks, and now we want to use that same silicon for translations. "
9
shriphani 1 day ago 0 replies      
I wonder if they can combine this with ByteNet (dilated convolutions in place of vanilla convs, which gives you a larger FOV) and add in attention; then you probably have a new SOTA.
10
pama 17 hours ago 0 replies      
This is a very cool development. Has anyone written a PyTorch or Keras version of the architecture?
11
m00x 1 day ago 1 reply      
Does this mean that we're close to being able to use CNNs for text-to-speech?
12
esMazer 1 day ago 1 reply      
no demo?
13
danielvf 1 day ago 1 reply      
TLDR: Cutting-edge accuracy, nine times faster than the previous state of the art, published models and source code.

But go read the article - nice animated diagrams in there.

28
Azure Cosmos DB, a globally distributed database microsoft.com
263 points by andysinclair  12 hours ago   81 comments top 21
1
ahelwer 9 hours ago 3 replies      
Designed with TLA+! :D Small interview with Leslie Lamport:

https://techcrunch.com/2017/05/10/with-cosmos-db-microsoft-w...

Hope Cosmos team releases a whitepaper on their experiences with the language. I'd heard snatches of gossip here and there that TLA+ was used inside Cosmos, but no concrete details.

edit: apparently there's also a video of Lamport talking about this https://www.youtube.com/watch?v=L_PPKyAsR3w

2
jasondc 11 hours ago 4 replies      
> Latency: 99.99% of <10 ms latencies at the 99th percentile

Impressive SLA to guarantee; I'm curious whether this will hold up under all the random customer workloads that are coming, e.g. updating a lot of fields in a large document (or just a very large insert).

3
hoodoof 44 minutes ago 1 reply      
Does it provide search?

It's a strange thing, but almost all new database technologies seem to leave search as an afterthought for some later day instead of starting on day one with the assumption that "it's all about search".

A database system that doesn't support rich search capabilities is restricted to very limited types of applications.

Often search is left unimplemented for years, or perhaps never implemented.

4
dharmashuklaMS 10 hours ago 1 reply      
Hi, this is Dharma from the Azure Cosmos DB team. We are super excited to make the service available today. We published the first of a series of technical blog posts here -> https://azure.microsoft.com/en-us/blog/a-technical-overview-.... Would love to answer any Cosmos DB questions.
5
judah 12 hours ago 2 replies      
This isn't a new database. This is a rebranding of the generically-named Azure DocumentDB, plus some new features.
6
voellm 2 hours ago 0 replies      
One of the best parts of the perf SLA is that we did it with all data encrypted at rest. I'm biased: I lead security for Cosmos DB.
7
hoodoof 53 minutes ago 0 replies      
Databases are one area where Amazon is way behind Google and Microsoft. DynamoDB is thoroughly awful, so it's good to see some competition.
8
joshuatalb 10 hours ago 1 reply      
This feels like MS' version of Google's Cloud Spanner that's GA in a few days. Same kind of marketing too.
9
yunong 2 hours ago 2 replies      
What are the CAP tradeoffs of Cosmos? It's not clear to me from looking at the SLA docs.
10
henriksen 11 hours ago 0 replies      
Talk about the foundations of Cosmos DB: https://www.youtube.com/watch?v=Yfmw7swCtZs
11
lobster_johnson 7 hours ago 0 replies      
Is Cosmos related to the work on Corfu/CorfuDB [2] [1] in any way?

[1] https://www.microsoft.com/en-us/research/publication/corfu-a...

[2] https://github.com/CorfuDB/CorfuDB

12
willchen 10 hours ago 1 reply      
Very interesting DB service. If I'm reading the docs right, it sounds like you can't do JOINs across documents?

https://docs.microsoft.com/en-us/azure/documentdb/documentdb...
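
From what I can tell, JOIN in the DocumentDB SQL dialect only fans out within a single document, e.g. over a nested array - treat this as a sketch from memory, not gospel:

    SELECT c.id, t AS tag
    FROM c
    JOIN t IN c.tags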

13
tracker1 9 hours ago 1 reply      
From the intro page [1]... many of the descriptions comparing it to NoSQL are wrong. There are plenty of NoSQL options that have similar features; though it isn't universal, the functionality can be and often is there. Cassandra, for example, probably does just as well in multi-zone/DC concurrency. Consistency options are also similarly tunable. Cockroach 1.0 was announced earlier as well.

It's not that I don't appreciate the option. This seems far closer to what DocumentDB should have been earlier on. Though tbh, I think Storage Tables are already pretty useful.

[1] https://docs.microsoft.com/en-us/azure/cosmos-db/introductio...

14
rattray 10 hours ago 1 reply      
I find this product slightly befuddling.

It seems like a "just throw all your data in this" kind of database, probably intended for everything but core application relational data (so, good for analytics, messaging, etc).

It sounds like the atom-record-sequence model at the heart of it is pretty key, but there's not a lot in the article about what that is and how it works. Is this a well-understood data structure used elsewhere?

The project seems very ambitious, and I could see it being used pretty heavily at a lot of companies. Thoughts?

15
dagi3d 10 hours ago 1 reply      
If I understood it correctly, they mentioned they offer horizontal scalability for their databases, and I wonder how it works for the graph data model.
16
VikingCoder 11 hours ago 3 replies      
I'd like to see an in-depth comparison with Google Cloud Spanner.
17
dmarlow 11 hours ago 1 reply      
I want a competitor to Google Cloud SQL in Azure.
18
martinknafve 11 hours ago 3 replies      
No backup/restore?
19
chaostheory 11 hours ago 3 replies      
20
Dimi9909 11 hours ago 0 replies      
Is this similar to DynamoDB in AWS?
21
anand_MSFT 11 hours ago 0 replies      
See https://www.youtube.com/watch?v=Yfmw7swCtZs by Turing Award Winner, Dr. Leslie Lamport, as he talks about Azure Cosmos DB
29
Volta: Advanced Data Center GPU nvidia.com
214 points by abhshkdz  11 hours ago   122 comments top 16
1
grondilu 10 minutes ago 0 replies      
I was wondering if this will be used in supercomputers. Apparently yes:

> Summit is a supercomputer being developed by IBM for use at Oak Ridge National Laboratory.[1][2][3] The system will be powered by IBM's POWER9 CPUs and Nvidia Volta GPUs.

https://en.wikipedia.org/wiki/Summit_(supercomputer)

Summit is supposed to be finished in 2017, though. I'm quite surprised this is possible since the Volta architecture has only just now been announced.

2
gigatexal 10 hours ago 3 replies      
These tensor cores sound exotic:

"Each Tensor Core performs 64 floating point FMA mixed-precision operations per clock (FP16 multiply and FP32 accumulate) and 8 Tensor Cores in an SM perform a total of 1024 floating point operations per clock. This is a dramatic 8X increase in throughput for deep learning applications per SM compared to Pascal GP100 using standard FP32 operations, resulting in a total 12X increase in throughput for the Volta V100 GPU compared to the Pascal P100 GPU. Tensor Cores operate on FP16 input data with FP32 accumulation. The FP16 multiply results in a full precision result that is accumulated in FP32 operations with the other products in a given dot product for a 4x4x4 matrix multiply."

Curious to see how the ML groups and others take to this. Certainly ML and other GPGPU usage has helped Nvidia climb in value. I wonder if Nvidia saw the writing on the wall, so to speak, with Google releasing their specialty hardware, the Tensor Processing Unit, and decided to use the "tensor" branding as well.
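
In numpy terms, each Tensor Core op is roughly D = A*B + C with fp16 inputs and an fp32 accumulator (a sketch of the semantics quoted above, not of the hardware):

    import numpy as np

    a = np.random.rand(4, 4).astype(np.float16)   # fp16 inputs
    b = np.random.rand(4, 4).astype(np.float16)
    c = np.zeros((4, 4), dtype=np.float32)        # fp32 accumulator

    # fp16 products fit exactly in fp32, so upcasting first models
    # "FP16 multiply, FP32 accumulate"
    d = c + a.astype(np.float32) @ b.astype(np.float32)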
3
arca_vorago 9 hours ago 4 replies      
More great hardware being stuck behind proprietary CUDA when OpenCL is the thing they should be helping with. Once again proprietary lock in that will result in inflexibility and digital blow-back in the long run. Yes I understand OpenCL has some issues and CUDA tends to be a bit easier and less buggy, but that doesn't detract from the principles of my statement.
4
hesdeadjim 10 hours ago 5 replies      
I find it so cool that technology created to make games like Quake look pretty has ended up becoming a core foundation of high performance computing and AI.
5
mattnewton 10 hours ago 3 replies      
Wow, this is just Nvidia running laps around themselves at this point. Xeon Phi is still not competitive, and AMD is focused on the consumer space; it looks like the future of training hardware (and maybe even inferencing) belongs to Nvidia. (Disclosure: I am and have been long Nvidia since I found out cuDNN existed and how far ahead it was.)
6
bmiranda 10 hours ago 1 reply      
815 mm^2 die size!

That's at the reticle limit of TSMC, a truly absurd chip.

7
randyrand 10 hours ago 1 reply      
What are the silver boxes that line both sides of the card? Huge Capacitors?
8
tobyhinloopen 10 hours ago 3 replies      
Time to play some games on it
9
braindead_in 7 hours ago 0 replies      
So when are the new AWS instances coming?
10
arnon 7 hours ago 1 reply      
This is odd for NVIDIA. They usually push out revised versions in the second year, not change the entire architecture to a new one.

Feels like they're feeling AMD breathing down their necks with their VEGA architecture, which should be very interesting.

AMD have also stepped up their game with ROCm which might take a chunk out of CUDA.

11
Symmetry 9 hours ago 0 replies      
I wonder if the individual per-lane program counters will pave the way for implementing some of Andy Glew's ideas for increased lane utilization in future revisions.

http://parlab.eecs.berkeley.edu/sites/all/parlab/files/20090...
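For readers unfamiliar with the per-lane PC change: Volta's independent thread scheduling lets each lane in a warp make forward progress on its own, which legalizes intra-warp code that could previously livelock. A minimal sketch of the pattern this enables (a hypothetical example, not taken from the linked paper):

    // A spin lock contended by every thread in a warp. With a single
    // shared warp PC (pre-Volta), the losers' spinning can starve the
    // winner's release and hang the warp; with per-lane PCs, each
    // thread progresses independently and the loop terminates.
    __device__ int lock = 0;

    __global__ void per_thread_mutex(int *counter) {
        while (atomicCAS(&lock, 0, 1) != 0) { }  // acquire
        *counter += 1;                           // critical section
        atomicExch(&lock, 0);                    // release
    }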

12
1024core 9 hours ago 2 replies      
FTA: "GV100 supports up to 6 NVLink links at 25 GB/s for a total of 300 GB/s."

The math doesn't add up.
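One plausible reconciliation: NVIDIA generally quotes NVLink bandwidth per link per direction, so the headline number likely sums both directions:

    6 links x 25 GB/s per direction x 2 directions = 300 GB/s bidirectional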

13
Athas 7 hours ago 1 reply      
Does this architecture improve on 64-bit integer performance? Have any of the GPU manufacturers said anything about that? At some point it becomes a necessity for address calculations on large arrays.
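For context on the address-calculation point: once an array exceeds what a 32-bit index can address, the flattened index itself has to be a 64-bit integer, so integer-multiply throughput lands on the critical path of every memory access. A minimal sketch (hypothetical example, not from the article):

    // The cast to size_t must happen before the multiply; a 32-bit
    // product silently wraps once the grid addresses more elements
    // than a 32-bit index can hold.
    __global__ void scale(double *x, size_t n) {
        size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= 2.0;
    }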
14
lowglow 8 hours ago 2 replies      
I'm really happy our startup didn't go all in on Tesla (Pascal architecture) yet. These look amazing.
15
gwbas1c 8 hours ago 1 reply      
How long until Tesla sues for trademark infringement? "from detecting lanes on the road to teaching autonomous cars to drive" makes it sound like there is an awful lot of overlap in product function.
16
caenorst 9 hours ago 1 reply      
Did they announce any release date or price during the show?
30
How Privacy Became a Commodity for the Rich and Powerful nytimes.com
223 points by tysone  1 day ago   96 comments top 9
1
client4 1 day ago 1 reply      
I actually wrote a House Bill for Montana in 2013 that would have been a pretty comprehensive privacy law for the state. My friend Dan tried his best to get it passed, but it was a bit too much new code for legislators to stomach. Thankfully Dan has broken the original legislation down into smaller parts and has really succeeded in improving privacy in Montana.

https://legiscan.com/MT/text/HB400/2013

2
danieldk 1 day ago 8 replies      
I think the article touches upon a key problem: even if some people are in principle willing to sacrifice some privacy in order to get a product for free, services should be required to state what data is shared with whom in clear human language (not in a 20-page wall of legalese).

The relationship between the user and a service is now completely asymmetrical: it is hard to know what your data is used for. It does not help that the legalese often boils down to "you will sell your soul".

3
bithive123 1 day ago 3 replies      
The idea that privacy can be traded away transactionally is a misrepresentation. Privacy is a choice: depending on whom I am interacting with, I will withhold certain information about my life.

If someone convinces a friend or family member of the lie that they can trade privacy for services, then my communications with them are compromised without my consent.

This is all about every being's right to choose to be private. The idea that it's okay to impinge on this right as long as someone thought of it as a transaction is morally bankrupt.

4
theprop 1 day ago 1 reply      
"Facebook revoked users' ability to remain unsearchable on the site; meanwhile, its chief executive, Mark Zuckerberg, was buying up four houses surrounding his Palo Alto home to preserve his own privacy. Sean Spicer, the White House press secretary, has defended President Trump's secretive meetings at his personal golf clubs, saying he is 'entitled to a bit of privacy.'"

That said, privacy is being commoditized for everyone as well, with tools such as Snapchat, the Epic Privacy Browser and Tor.

5
ajack46 1 day ago 5 replies      
How can one trust someone else with their email account? That just seems a weird thing to do, regardless of whether it was less of a threat before. Your email is something personal.
6
sixothree 1 day ago 1 reply      
What I want to know is: how can I buy the information marketers have collected about me?
7
chickenfries 1 day ago 0 replies      
Has anyone implemented something similar to Unroll.me, except not SaaS? Perhaps as a browser extension?
8
davidgerard 1 day ago 1 reply      
Fake title. Real title (and URL): How Privacy Became a Commodity for the Rich and Powerful
9
ckastner 1 day ago 1 reply      
Last month, the true cost of Unroll.me was revealed: The service is owned by the market-research firm Slice Intelligence, and according to a report in The Times, while Unroll.me is cleaning up users' inboxes, it's also rifling through their trash. When Slice found digital ride receipts from Lyft in some users' accounts, it sold the anonymized data off to Lyft's ride-hailing rival, Uber.
       cached 11 May 2017 04:11:02 GMT