Hacker News with inline top comments - Best, 13 May 2017
1
Get started making music ableton.com
2069 points by bbgm  3 days ago   457 comments top 53
1
hxta98596 3 days ago 7 replies      
Anecdotal: there are a few different approaches to learning songwriting that seem to click for beginners. The "build up" approach is the most common and is what this link offers: it first teaches beats, then chords, then melodies and then, in theory, vocals etc. These lessons in this order make sense to many people, but not everyone.

If you're interested in learning to make music and the lessons in the link are confusing or overwhelming or boring, some students find a "peel back" approach to learning songwriting easier to grasp at first. A peel back approach just involves finding a song and then teaching by stripping away each layer: start with stripping away vocals, then learn melodies, then chords, then finally learn about the drum beat underneath it all. A benefit of the peel back approach is that melodies and vocals are the memorable parts of a song and the easiest to pick out when listening to the radio, so a student can learn using songs they know and like. Either way, songwriting is hard and fun. Best of luck.

P.S. I think Ableton makes good software and I use it along with FL and Logic. They did a solid job with these intro lessons. But worth mentioning, there is free software out there (this includes Apple's GarageBand) that offers key features a beginner just learning songwriting can practice and mess around on before purchasing a more powerful DAW like Ableton.

2
djm_ 3 days ago 0 replies      
For those wondering, this is made with Elm lang, Web Audio & Tone.js [1]

[1] https://twitter.com/AbletonDev/status/861580662620508160

3
tannhaeuser 3 days ago 31 replies      
I've always wondered why musicians stick with the conventional musical notation system, and haven't come up with something better (maybe a job for an HNer?).

I mean, conventional music notation represents tones on five lines, each capable of holding a "note" (is that the right word?) on a line as well as in between lines, possibly pitched down or up, respectively, by flats and sharps (depending on the tune etc.).

Since western music has 12 half-tone steps per octave (octave = an interval wherein the frequency is doubled; this is a logarithmic scale, so compromises have to be made when tuning individual notes across octaves), this gives a basic mismatch between the notation and e.g. the conventional use of chords. A consequence is that, for example, with the treble clef, you find C' in the second space from the top, and thus at a very different place visually than C (one octave below), which sits on, rather than between, an additional ledger line below the bottom-most regular line.
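To make the logarithmic part concrete, here's a minimal Python sketch of the 12-tone equal temperament arithmetic (assuming the standard A4 = 440 Hz reference; the sample notes are just illustrative):

    # 12-tone equal temperament: each half-tone step multiplies the
    # frequency by 2**(1/12), so 12 steps exactly double it (one octave).
    A4 = 440.0  # reference pitch in Hz

    def step_freq(n):
        """Frequency n half-tone steps above (or below, if negative) A4."""
        return A4 * 2 ** (n / 12)

    print(step_freq(0))    # 440.0   (A4)
    print(step_freq(12))   # 880.0   (A5, one octave up)
    print(step_freq(3))    # ~523.25 (C5)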

I for one know that my dyslexia when it comes to musical notation (e.g. not recognizing notes fast enough to play from the sheet) has kept me from becoming proficient on the piano (well, that, and my laziness).

4
JasonSage 3 days ago 11 replies      
This is some good coverage of the music theory behind songwriting, which is important in making songs that sound good.

However, there's another part of making music which is not covered at all here, which is the actual engineering of sounds. Think of a sound in your head and recreate it digitally: it'll involve sampling and synthesizing, there are tons of filters and sound manipulations to go through, they all go by different names and have different purposes; it's a staggering amount of arcane knowledge.

Where is the learning material on how to do this without experimenting endlessly or looking up everything you see? I want a reverse dictionary of sorts, where I hear a transformation of a sound and I learn what processing it took to get there in a DAW. This would be incredibly useful to learn from.
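Not an answer to the reverse-dictionary wish, but here's a taste of what one entry in it might look like: a minimal subtractive-synthesis sketch in Python (assuming numpy/scipy are available; playback and file output are omitted), where a harsh sawtooth gets mellowed by a low-pass filter, one of the most common transformations in a DAW:

    import numpy as np
    from scipy.signal import butter, lfilter

    sr = 44100                       # sample rate in Hz
    t = np.arange(sr) / sr           # one second of timestamps
    saw = 2 * ((220 * t) % 1) - 1    # bright 220 Hz sawtooth in [-1, 1]

    # 4th-order Butterworth low-pass, 800 Hz cutoff (normalized to Nyquist):
    b, a = butter(4, 800 / (sr / 2))
    mellow = lfilter(b, a, saw)      # same note, rounder timbre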

5
2845197541 3 days ago 6 replies      
This seems like the wrong place to start. This seems like the place to start learning a DAW and snapping together samples to, IMO, make depersonalized unoriginal loop music in a society awash with it, because DAWs and looping have created an angel's path to production and proliferation. Learn to drag and drop and you can tell people you meet that you're a musician or a producer. I've met too many mediocre people like this. There should be a disclaimer when this page loads: learn to play an instrument first. Bringing forth music from a physical object utilizes the body as well as the mind, attunes to nuance, and emphasizes that music is primarily a physical phenomenon. It's also just fun, and you can jam with or perform for friends. This cut-and-paste, drag-and-drop, sample-and-loop mentality popularized by the rise of hip-hop has led to an oversaturation of homogeneous, uninspired, unoriginal sound in society. Maybe I'm old fashioned but I think people should spend long, frustrated hours cutting and blistering their fingers for the craft, at least at first. That builds character and will show in your music as you move on.
6
exabrial 3 days ago 1 reply      
Guys, if you haven't seen Sonic Pi (http://sonic-pi.net/), this is also a great tool! You can write beats using a Ruby DSL and it runs them in real time.

I sat down and did this in an hour: https://github.com/exabrial/sonic-pi-beats/blob/master/house...

Sam Aaron is the guy behind the project, he does a lot of ambient type stuff: https://www.youtube.com/watch?v=G1m0aX9Lpts

7
adamnemecek 3 days ago 13 replies      
I'm actually working full time on a new DAW that should make writing music a lot faster and easier. Current DAWs don't really understand music. Also, the note input process and experimentation are extremely time-consuming, and the DAW never helps. Current DAW : my thing = Windows Notepad : IDE. The HN audience is definitely one of my core groups.

If you are interested, sign up here https://docs.google.com/forms/d/1-aQzVbkbGwv2BMQsvuoneOUPgyr... and I'll contact you when it's released.

8
jarmitage 3 days ago 1 reply      
Check out Jack Schaedler, who works on this at Ableton: https://jackschaedler.github.io/

He even made an interactive essay about the GRAIL text recognizer from the 1960s https://jackschaedler.github.io/handwriting-recognition/

9
puranjay 3 days ago 11 replies      
I'm an amateur musician and one of the things I hate about electronic music is how "distant" it all feels.

I'm used to picking up the guitar, playing a few chords and writing a melody.

Ableton (or any other DAW) feels like a chore. I have to boot up the computer, connect the MIDI keyboard, the audio interface and the headphones, then wait for Ableton to load, then create a new track and add a MIDI instrument before I can play a single note.

I know the sessions view in Ableton was an attempt to make the music feel more like jamming, but it doesn't really work for me. A lot of musicians who play instruments I've talked to feel the same way.

I would love an "Ableton in a box" that feels more intuitive and immediate.

10
fil_a_del_fee_a 3 days ago 2 replies      
I purchased the Ableton Push 2 a month or so ago and it has to be one of the most beautifully engineered pieces of equipment I have ever used. Look up the teardown video. Extremely simple, yet elegant. The Push 1 was created by Akai, and apparently Ableton wasn't satisfied, so they designed and built their own.

https://www.youtube.com/watch?v=YItWQdJgXLs

11
radiorental 3 days ago 2 replies      
Related, this is trending on Reddit this morning. Just fascinating to watch someone build a catchy track up on such an (apparently) basic piece of equipment...

https://www.youtube.com/watch?v=FK5cU9qWRg0

12
Mister_Snuggles 3 days ago 3 replies      
As someone who has no musical talent whatsoever, I'm oddly intrigued by Ableton's products. I've occasionally stumbled across the Push[0] and been fascinated by it as an input device.

This site is another thing to add to my Intriguing Stuff list.

[0] https://www.ableton.com/en/push/

13
thatwebdude 3 days ago 1 reply      
Get Started Making Music (In Ableton Live).

Love the simplicity, though it does seem to favor EDM (for obvious reasons).

I've always loved the idea of using Live in a live improvisation context, potentially with multiple instruments having their own looping setup; or just a solo thing. It's hard to find that sort of thing, though.

Checking out Tone.js now.

14
gcoda 3 days ago 0 replies      
They put Tone.js to good use. Promoting Ableton by showing what cool stuff you can do with a free JS library that can work in the browser... weird? https://tonejs.github.io
15
nonsince 21 hours ago 0 replies      
I did music at GCSE and A-level so I knew about a lot of the basic theory here, but it's fallen out of use in the past year or two. The best part of this by far was the deconstruction of tracks that I like into their components and realising that they're not insurmountably complicated. Kinda like a musical version of "you could have invented monads".
16
pishpash 3 days ago 0 replies      
To all the people complaining, I feel you. There is not one tool that takes you through the entire workflow of making music well, but they sell software pretending they do support the entire workflow. In truth, you write and arrange in specialized notation software, create samples in specialized synthesis software, or record live audio, then you use audio workstations to fix, edit, transform, and mix. Even there you may rely on external hardware or software plugins. These tools aren't meant for a one-person creator. They mimic the specializations in the music industry. A good all-in-one software simply does not exist, and small teams trying to work on these projects are trying to bite off a real big pie. It's very complex and requires a lot of specialized knowledge, and many of the pieces are probably patent-encumbered, too. But good luck!
17
ahoglund 3 days ago 0 replies      
This looks strangely similar to a collaborative app I made last year with Elixir/Elm/WebAudio API:

https://www.youtube.com/watch?v=TCVuLh5Io9A

18
calflegal 3 days ago 2 replies      
The timing of this post is funny, as just this week I launched a little ear training game built with React and Tone.js: https://www.notetuning.com/
19
geoffreyy 3 days ago 0 replies      
The first page of that tutorial reminded me of a product I saw at the Apple store a few weeks ago called Roli. They have a great app [0], but the hardware [1] itself is not ideal, yet unfortunately necessary to unlock some features... I will be waiting for a v2...

[0] https://roli.com/products/noise

[1] https://www.apple.com/shop/product/HKFR2VC/A/roli-lightpad-b...

20
6stringmerc 2 days ago 0 replies      
Over the years I like to think Ableton has been at the forefront of the digital music community (at least among the pack, like Korg), at a special nexus of hardware, software, VST developers, and global sharing, by way of an incredibly robust and deep Live Suite program. Continuing to reach out and share community resources is habitual for the firm, and I'm very pleased to see this get all sorts of attention from this community. The intersection of Technology and Art is a bright, multi-cultural future, and with that comes responsibility. To put it in a phrase, this is an example of Ableton providing a ladder up to new members, rather than slamming the door behind them once a certain level was reached. Enjoy!
21
ilamont 3 days ago 2 replies      
I was looking for an app like this for my son. He started with "My Singing Monsters" and some music lessons at school, but when I tried to get him into GarageBand it was too much for a beginner.

Thank you to the creator ... I will show it to him later today. I am not sure how far he can take it, but I like what I have seen so far.

Also, if anyone has other suggestions for music-making apps for tween kids I am all ears ...

22
stevenj 3 days ago 0 replies      
I think the design of this is really interesting.

It's designed to make the user (e.g. anyone who likes music) just want to play with it, in a way that's very intuitive via its simple, visual layout. And it provides instant feedback that makes you want to continually tinker with it to make something that you like more and more.

Web development/programming training tool makers should really take note of this.

23
dyeje 3 days ago 1 reply      
Wow this is super high quality content. Props to Ableton. By far my favorite DAW, but I wish they would come out with a cheaper license.
24
meri_dian 3 days ago 3 replies      
I can't speak for other DAW's, but Ableton was really easy for me to pick up as a complete novice to digital music production
25
tomduncalf 2 days ago 0 replies      
Off topic, but I posted the exact same link about 24 hours earlier: https://news.ycombinator.com/item?id=14291332

Not that it's important, but I'm kinda curious (a) why my submission would only get 7 points, and (b) how it was possible for someone else to submit the same link so soon after and gain the points rather than my submission getting boosted?

Is it just random chance/time of day of posting? Or is it because the user who posted this had more points to start with and so was more likely to be "noticed"?

Awesome site in any case!

26
hmage 3 days ago 0 replies      
I noticed many people commenting here think there's only one page.

There's more -- scroll down and click next.

27
alxdistill 3 days ago 0 replies      
Like any technology, there can be lots of different inputs and outputs. I think it is safe to say that Roland and the TR-808, 909, and 303 changed music notation, and music, forever with their popularization of grid-based music programming. It may be that Ableton is doing the same with their software. Each year the tools get better for these sorts of creative activities. The Beatles recorded Abbey Road on a giant, expensive four-track machine owned by a record label. In 1995 I saved up my money from a summer job and bought a 4-track cassette recorder for about $500. Now you can get a four-track app for your mobile phone for about $5. Or download an open source one for free.

YAY :)

28
skandl 3 days ago 0 replies      
This is beautiful and amazing. I love how each step builds on the previous, and uses pop examples to explain theory concepts. I've often wondered about so many of the things presented in this, particularly around what common characteristics a genre has with respect to rhythm! Big kudos to the team who built this. I'd love to learn about the development backstory, as this feels a lot like an internal side project made by passionate individuals and less like a product idea dreamed up with requirements and specs.
29
bbreier 3 days ago 1 reply      
Two friends and I have tried to make music production easier (and more robust) on the phone in our spare time, and came up with our iPhone app, Tize (https://itunes.apple.com/us/app/tize-make-music-beats-easy/i...), to that end.

If it sounds like something you're interested in please give it a go! We're always working to improve it and open to feedback. (Android is coming soon)

30
PeanutNore 3 days ago 0 replies      
I've been using Ableton Live for about a week after getting a free copy with the USB interface I bought (Focusrite Scarlett 2i2, highly recommend) and I had to turn to YouTube to figure out how to actually sequence MIDI drums in it.

I use it pretty much solely for recording, but I take advantage of the MIDI sequencer functions to program in a drum beat instead of recording to a click, because I've found my timing and rhythm is so much better playing to drums than it is just playing to a metronome.

31
WWKong 3 days ago 0 replies      
I wanted to build something similar for mobile to make music on the go. I started it here (abandoned now, but code is linked): http://buildanappwithme.blogspot.in/2016/04/lets-make-music....
32
guruz 3 days ago 0 replies      
I think I've watched this video a ton of times: https://www.youtube.com/watch?v=eU5Dn-WaElI

That guy is using Ableton Live to re-create a popular song by The Prodigy.

33
schemathings 3 days ago 0 replies      
If you want to get an interesting take on the 'Live' part of Ableton Live, look for 'Kid Beyond Ableton' videos. He builds up tracks live on stage by beatboxing all the instruments, and recently uses something called a Hothand as his controller.
34
dsmithatx 3 days ago 0 replies      
Did this get voted 1023 points (so far) because it's a great article, or does everyone love music? Btw, I use Ableton after my Pro Tools rig was stolen, and I'm buying a new MatrixBrute. I can't wait to check out this site.
35
whiddershins 3 days ago 6 replies      
Ableton Live is my main DAW. I use it every day, generally for hours, and for a wide variety of purposes.

The most depressing thing about Ableton is made obvious in two seconds of messing with that tutorial: a complete disregard for music in the sense of pushing the boundaries of time, or doing things that are not tied to any sort of grid, and for the sense of music as an emotive form.

So many aspects of music are very annoying or borderline impossible to do in Ableton. Yet in all these years, and with so many installations, they just never addressed those issues. Instead they vaguely pretend that music which would require features they don't have is radically experimental. Which might become true if so many people learn music only through using their software.

Seriously, Ableton. Stop pretending making music is clicking on and off in little boxes. It's embarrassing.

--

Edited to take out the "art" part and put in a couple of more specific criticisms.

36
nialv7 1 day ago 0 replies      
It'd be nice if we could share the stuff we make in the playground with friends.
37
clarkenheim 3 days ago 1 reply      
Similar concept using Daft Punk samples instead: http://readonlymemories.com/ plus some filtering and looping capability.
38
viach 3 days ago 0 replies      
It reminds me of "Generative Music Otomata": http://www.earslap.com/page/otomata.html
39
rubatuga 3 days ago 0 replies      
This is extremely comprehensive for any beginner/intermediate musician/composer, and I'm really impressed at how they managed to implement the content in a mobile friendly manner!
40
ablation 3 days ago 0 replies      
Love it. Great web app from a really good company. I use Ableton a lot and I'm continually impressed with their software and content marketing activity.
41
tommynicholas 3 days ago 0 replies      
I used to be a professional musician and I've used a lot of real Ableton equipment and I still found this incredibly interesting and fun.
42
moron4hire 3 days ago 0 replies      
This is really awesome. They really went the extra mile on building this out. It even supports multi-touch screens. Very well done.
43
markhall 3 days ago 0 replies      
Wow, this is super impressive. I fell in love after adding a few chords over drums. Amazing.
44
mayukh 3 days ago 2 replies      
Wow, this looks great. Is there an app for this? I'd love for my son to try.
45
xchip 2 days ago 0 replies      
This is AWESOME! Sharing it with all my friends!

Thanks OP!

46
pugworthy 3 days ago 0 replies      
So much for being productive today...
47
octref 3 days ago 0 replies      
Yep, not using the hottest framework, not an SPA, not a PWA. Just something that loads fast and works great. Good job.
48
gowk 2 days ago 0 replies      
That's fantastic!
49
duggalr2 3 days ago 0 replies      
This is amazing!
50
hashkb 3 days ago 6 replies      
This is not the basics of making music. It's a super advanced technique using a computer. The real basics involve pencil, (staff) paper, and hard work. Downvotes please.
51
uranian 3 days ago 5 replies      
A more appropriate title would be: Get started triggering samples.

Making music is really something different IMO.

52
_pmf_ 3 days ago 0 replies      
Amazing presentation. Concentrates on the content, works on mobile[0], no bullshit effects.

[0] within the constraints of Android's embarrassingly crappy audio subsystem

53
greggman 2 days ago 0 replies      
Am I missing something? I went through all the tutorials and AFAICT there isn't much here. It seemed like "here's a piano. Here's some music made on the piano. Now bang on the piano. Fun yea?"

Is there really any learning here? Did I miss it? I saw the sample songs and a few minor things like "major chords are generally considered happy and minor sad" etc., but I didn't feel that by going through this I'd actually have learned much about music.

I'm not in any way against EDM or beat-based music. I bought Acid 1.0 through 3.0 back in the 90s, which AFAIK was one of the first types of apps to do stuff like this. My only point is I didn't feel like I was getting enough instruction to truly use a piece of software like this. Rather, it seemed like a cool flashy page with a low content ratio. I'm not sure what I was expecting. I guess I'd like some guidance on which notes to actually place where and why, not just empty grids and very vague direction.

2
A crashed advertisement reveals logs of a facial recognition system twitter.com
1399 points by dmit  2 days ago   512 comments top 62
1
kimburgess 2 days ago 29 replies      
You'd be surprised / scared / outraged if you knew how common this is. Any time you've been in a public place for the past few years, you've likely been watched, analysed and optimised for. Advertising in the physical world is just as scummy as its online equivalent.

Check out the video here http://sightcorp.com/ for an ultra creepy overview. You can even try their live demo: https://face-api.sightcorp.com/demo_basic/.

2
anabis 2 days ago 5 replies      
In Japan at least, before automated facial recognition, cashiers recorded buyer demographics by hand. I would think other places do it too.

Edit: Here is what the buttons look like. Gender and age: https://image.slidesharecdn.com/hvc-c-android-prototype20141...

3
samtho 2 days ago 1 reply      
During 2010-2012, I was part of a startup called Clownfish Media. We basically created something very similar to this and got scarily accurate results even then. Given how accessible computer vision has become, the image in the tweet comes as no surprise to me.

Best part: we got a first-gen Raspberry Pi to crunch all the data locally at 2-5 fps. Gender, age group (child, youth, teen, young adult, middle age, senior), and approximate ethnicity were all recorded and logged. Everyone had a unique profile, and the system could track people between cameras and across days (underlying facial features do not change).

Next time you look at digital signage, just be aware that it is probably looking back at you.
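For the curious, the detection half of such a pipeline is only a few lines with stock OpenCV. This is a rough sketch of the general approach, not our actual code; the age/gender/ethnicity models would be separate classifiers run on each detected face crop (this assumes the opencv-python package, whose cv2.data module points at the bundled cascades):

    import cv2

    # Stock Haar cascade that ships with opencv-python.
    finder = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cam = cv2.VideoCapture(0)  # the camera behind the signage glass
    while True:
        ok, frame = cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = finder.detectMultiScale(gray, 1.3, 5)
        print("faces in view:", len(faces))  # one log line per frame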

4
_-_T_-_ 1 day ago 2 replies      
(Supposedly) Lee Gamble's comment on Reddit:

"Hi. I am the original taker of the photo. There is a screen that normally shows peppes pizza advertisements in front of peppes pizza in Oslo S. The advertisements had crashed revealing what was running underneath the ads. As I approached the screen to take a picture, the screen began scrolling with my generic information - That I am young male (sorry my profile picture was misleading, not a woman), wearing glasses, where I was looking, and if I was smiling and how much I was smiling. The intention behind my original post on facebook was merely to point out that people may not know that these sort of demographics are being collected about them merely by approaching and looking at an advertisement. the camera was not, at a glance, evident. It was merely meant as informational, maybe to point out what we all know or suspect anyway, but just to put it out in the open. I believe the only intent behind the data collected is engagement and demographic statistics for better targeted advertisements."

Source: https://www.reddit.com/r/norge/comments/67jox4/denne_kr%C3%A...

5
gedrap 1 day ago 3 replies      
To be honest, I am fairly surprised at the reaction here on HN. It's not really surprising to see such a system; it would be more surprising if such a system did not exist, because offline ads are a huge business and the technology is here. This goes together with conversion tracking at physical shops, etc.

I am equally surprised by the comments about how come engineers implement such systems, how they find it ethical, etc. I'm sorry, but it sounds just a bit out of touch with the real world, or just outside of the HN bubble. Given the things that money motivates people to do, it's probably one of the least unethical things that have been done.

I am not judging that this is right or wrong, I am simply stating the fact that nothing about this should be surprising. Yes, this is slightly sad, but that's simply the reality of technological advancement. It's not really possible to expect the rest of the world to use the technology only for things considered 'right', etc.

6
ffriend 1 day ago 14 replies      
As someone working on a similar project (specifically, emotion recognition), I'm highly interested to hear what such a product should look like in order not to be considered unethical. So far from the comments I see that:

- it should be made clear that you are being analyzed, e.g. by a big yellow sticker near the camera

- no raw data should be stored

- it should be used to collect statistics, not identify individuals (?)

Is this sufficient to consider such software fair use? What else would you add to the list to make it reasonable?
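To make the "no raw data / statistics only" points concrete, here's a sketch of an aggregate-only logger in Python; get_frame() and count_faces() are hypothetical stand-ins for whatever camera and detector a real system would use:

    from collections import Counter
    from datetime import datetime

    hourly_viewers = Counter()  # the only thing ever persisted

    def observe_once():
        frame = get_frame()     # hypothetical: raw image, held in memory only
        n = count_faces(frame)  # hypothetical detector, returns a count
        hourly_viewers[datetime.now().hour] += n
        # frame is discarded when this function returns;
        # no raw data is stored, sent, or tied to an individual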

7
cyberferret 2 days ago 2 replies      
Uh, I got sidetracked and brain-hammered by the devolving discussion on that Twitter thread, thus couldn't find the context for this pizza shop kiosk. Is it a customer service portal that attempts to identify the person in front of it to try and match up with an order, or a plain advertising display that is trying to capture the demographics of the people who happen to stop in front of it and look at it?
8
tjpnz 2 days ago 6 replies      
It saddens me that there are people in our profession for whom implementing such a thing presents no moral or ethical dilemma.
9
dsmithatx 1 day ago 0 replies      
Here is the software the sign is using.

http://www.adflownetworks.com/audience-detection/

10
korethr 2 days ago 2 replies      
Though not at the level depicted in the movie, I am nonetheless reminded of Minority Report.
11
treyfitty 1 day ago 2 replies      
I've sifted through 80% of the comments here, and I couldn't find a mention of the unintended consequences of this technology.

Ethical vs. Unethical, Pro-Privacy vs. Against Privacy are the two common discussion points. I, however, think the bigger problem here is that there's a very non-zero probability that this technology may cause unintended consequences simply by relying on false/inaccurate data.

For one, I work in analytics (a loaded catch-all occupation) and I work with people who would marry their "data skills" if they could. In my industry, false positive rates of 80% are acceptable, and openly admitted errors in "machine-learning" logic (quoted to highlight my company's buzzword usage; it's practically non-existent) are made daily. People create algorithms, and people make errors.

Let's let our imagination run wild here for a second: It's 2030, and this technology becomes ubiquitous to the point where no one objects. Businesses take all the data from sentiments, gender, age...etc. to optimize for their target demographic, and price accordingly. In other words, let's assume this tech is used for perfect price discrimination. Economic theory dictates this is a win-win for everyone since everyone starts paying their willingness to pay. But, let's assume there's a catastrophe and medicine is in dire need. Price discrimination works fine assuming perfect competition, and is a useful framework, but it breaks down empirically where we live in a society that doesn't behave so rationally. Who survives? Those willing to pay the most, and the algorithm worked flawlessly here. But it was not intended to dictate who survives.

What I'm trying to say is that we should be cognizant of the fact that we don't live in a perfect bubble, and technology like this should be scrutinized exhaustively for its effects, including any unintended consequences. We live in a society (duh), and as a society, it is up to us, with the help of policy makers, to determine the fate of this technology.

12
pavement 2 days ago 6 replies      
Okay, time to start wearing ski masks and Santa Claus costumes in public at all times.
13
spangry 1 day ago 1 reply      
I'm a little surprised about the HN reaction on this one. You guys didn't seem to care about collection of passive biometrics a year ago: https://news.ycombinator.com/item?id=11172652 . What's changed?
14
nsgi 1 day ago 0 replies      
For an even more disturbing idea, Titan Park in Beijing has machines in its toilets which scan people's faces before dispensing toilet paper.

https://nakedsecurity.sophos.com/2017/03/21/park-uses-facial...

15
DonHopkins 2 days ago 0 replies      
There should be one of these in front of the ad to give passers-by more privacy:

http://hackaday.com/2010/10/15/window-curtain-moves-to-scree...

16
alkonaut 1 day ago 0 replies      
This is in itself not scary compared to what a random website does when I visit. That is, given that what we see in this log is actually all it does. What's scary is what we don't see (does it store this? does it cross-reference anything? does it target ads based on it?).

I don't really think that's the case (here, yet), but I do think it's scary that it's so easy to do that it's not just done as a proof of concept but actually used in production in a low-tech industry.

Gathering demographics or sentiment without storing, cross-referencing (has this person been here before, etc.) or otherwise using the data for anything such as targeting ads is kind of acceptable. I mean, it wouldn't be hard to do that manually via a camera if you wanted to test the engagement of an ad. I'm sort of hoping this is just some tech project from a university or something, and not an actual product you can buy and hook into some adtech service.

Edit: as someone else pointed out, it's not a proof of concept, it's an off-the-shelf adtech product. Because of course :( http://www.adflownetworks.com/audience-detection/

17
Kliment 1 day ago 1 reply      
And just today I got an ad (in the paper mail) from an electronics distributor notifying me of new parts they stock. Among them was an embedded face and expression recognition engine that would emit pretty much this data, in a convenient text output you can read into any little microcontroller and act on (Omron B5T-007001-010, if anyone is interested). This is no longer exciting cutting-edge technology; it's off the shelf. And terrifying.
18
clydethefrog 1 day ago 0 replies      
For consideration:

How to ZAP a Camera: Using Lasers to Temporarily Neutralize Camera Sensors

http://www.naimark.net/projects/zap/howto.html

19
spsful 2 days ago 1 reply      
Oh look, Ghostery's product pitch comes in handy now: https://www.youtube.com/watch?v=EKzyifAvC_U
20
josteink 1 day ago 0 replies      
A Norwegian article on the subject: http://www.dinside.no/okonomi/reklameskilt-ser-hvem-du-er/67...

Google translate is readable, if not super-mega-accurate: https://translate.google.com/translate?sl=auto&tl=en&js=y&pr...

21
kirykl 2 days ago 0 replies      
With this much personal care to really know their customers by face I'm sure they put just as much personal care into the quality and craft of the product /s
22
gech 2 days ago 0 replies      
Despicable. Any authors of this work should be publicly shamed and punished. And don't get me started on what should befall the owners of capital that drove this.
23
stanmancan 1 day ago 0 replies      
This has been happening for at least a decade. I had to do some updates on a system in 2008 that had this same functionality built in, and they were far from the first company to do it.
24
BenGosub 1 day ago 0 replies      
Funny that this has been posted by ambient/experimental music producer Lee Gamble https://soundcloud.com/leegamble
25
tps5 1 day ago 0 replies      
I don't see any evidence that this is a "facial recognition system."

It's likely hard to legislate against software that attempts to detect if there is a person, what their expression is, and guesses at their gender.

You could imagine that job being done by a person (just noting how many people stopped at the advertisement, and what their expression was). I don't think there's really a way to make that illegal.

I suppose I think it's something that people should be aware of, though.

26
nateberkopec 2 days ago 3 replies      
I saw a pitch for this tech 5 years ago. Not sure the name of the company. The idea is they can measure engagement (how long you looked), approximate age and sex.

Five years ago it didn't seem so sinister. A lot has happened since then, I guess.

27
bighi 1 day ago 0 replies      
I work in a big retail company here in Brazil.

If you enter most of our stores with a phone in your pocket, you're being tracked. They track where you went, in front of which shelves you stopped and for how long, whether you went to the cashiers or just left...

And if we track people here in the third world, you can be sure you are being tracked much more in first-world stores.

28
ddmma 19 hours ago 0 replies      
Imagine you can automatically track customer loyalty and offer customers discounts. Here is an integration example using Arduino: https://www.hackster.io/dasdata/dascognitiveservices-c2d991
29
Smushman 2 days ago 0 replies      
Original Reddit post with background:

https://www.reddit.com/r/norge/comments/67jox4/denne_kr%C3%A...

"...peppes pizza in Oslo S."

30
PeterStuer 1 day ago 0 replies      
There are many solutions that do this, both proprietary and open source. Accuracy is influenced by lots of factors, some to do with the setup (camera angle, lighting), the hardware (camera quality, computation speed) and the subjects (race, facial hair, glasses). We used this in research projects involving elderly mood assessment and television viewers' empathic responses. The package we used was marketed by Noldus ( http://noldus.com ) and developed by Vicarvision ( http://vicarvision.nl ), but most of these packages perform at about the same level.
31
littlecranky67 1 day ago 0 replies      
This is nothing new; ad companies are actively marketing these features. See e.g. http://livedooh.com

Quote from their website: "Audience Measurement included

The information and statistics needed in order to realize audience targeting in DOOH is gathered through livedooh's integrated anonymous video analysis, which collects information about gender, age and length of view. Audience metrics are used by the ad server's decision engine to optimize advertisement delivery and increase performance."

32
beached_whale 1 day ago 0 replies      
The Island Airport (Billy Bishop) in Toronto is littered with adverts that are camera-connected. Power management aside, looking at what is possible in OpenCV gives a good indication of what can be, and probably is, being done. From tracking where you look to matching faces...

The problem is that this is an agency(of the government) owned facility.

33
staticelf 1 day ago 0 replies      
This is illegal in both Sweden and Norway though.
34
jlebrech 1 day ago 0 replies      
how does it detect which gender you identify as? mind reading?
35
throwaway74727 1 day ago 0 replies      
India is forcing this Orwellian nightmare upon all its citizens (under very shady circumstances).

http://www.rediff.com/news/column/the-aaadhar-effect-say-bye...

36
yalogin 1 day ago 0 replies      
It just makes too much sense to show ads based on demographics. They now have robots in malls too. They are just recording everything: processing, logging, extracting, selling and upselling. There is no privacy. The problem is that not only does it make economic sense just to have these robots; the added intelligence from the data mining makes it even more attractive.
37
Shinchy 1 day ago 0 replies      
God, the comments on that tweet are enough to make me lose all faith in humanity. Honestly, what is wrong with people?
38
bschwindHN 1 day ago 0 replies      
I wonder what would happen if someone wore a shirt with this pattern while walking in front of it:

https://thenextweb.com/tech/2017/01/04/anti-facial-recogniti...

39
sleepybrett 1 day ago 0 replies      
This has been a feature of these digital sign products for a few years; generally they aren't interested in specific faces, just whether faces are seen looking at the sign and for how long. It's all just simple OpenCV stuff.
40
Cyph0n 2 days ago 3 replies      
I wonder how accurate these measurements are in practice. They could just be placeholder implementations, right?
42
dghughes 2 days ago 1 reply      
male young adult

I think they zeroed in on their demographic, good job!

43
PascLeRasc 1 day ago 0 replies      
From that thread I learned all you have to do to get away with evil is get the technologists to argue over "logs" vs "code".
44
infectoid 1 day ago 0 replies      
LPT (1984): Keep a roll of small round dot stickers handy. Black dots are harder to notice on the lens.
45
sly010 1 day ago 0 replies      
This is one case where a tinfoil hat might actually be effective.
46
avg_dev 2 days ago 3 replies      
Well, that's disturbing. I bet we'll see much more of this in the coming days.
47
cheetos 2 days ago 4 replies      
Is this unethical if everything is done locally and no data is stored or resold?
48
yegle 2 days ago 1 reply      
Is this new? Things like Affectiva have been out for a long time.
49
johnmarcus 1 day ago 0 replies      
Hrm... so as it turns out, burkas really are the new expression of freedom. Who would have guessed?
50
khasan222 1 day ago 0 replies      
Makes me wonder if this is something Snapchat does whenever you look at some of their featured stories.
51
carapace 1 day ago 0 replies      
Has no one watched "Person of Interest"!? Just watch the show intro, that's all you need to know.

Ubiquitous surveillance is only going to get more, uh, ubiquitous. It's the end of privacy but also the end of crime...

52
oelmekki 1 day ago 0 replies      
I think that now, my favorite subversive street action will be to put mirrors in front of ads.
53
ddmma 2 days ago 0 replies      
Cognitive Services-enabled applications; why so complicated? It's like spotting a new hippo in the wild: https://azure.microsoft.com/en-us/services/cognitive-service...
54
kabes 1 day ago 0 replies      
Why are people so surprised by this? Imagine you're a company building digital signage/advertising products. Wouldn't this be one of the first ideas that pops into your head? The technology is out there for free...
55
killin_dan 1 day ago 0 replies      
Is this illegal? They have the right to process their own footage, I would think.
56
lfender6445 2 days ago 0 replies      
Just so I'm clear, this is someone in a pizza shop looking at a Windows kiosk with a camera?

It would be interesting to see the ad and how/if it changes based on who is watching.

57
Freeboots 2 days ago 0 replies      
Not many smiles :(
58
cmdrfred 1 day ago 0 replies      
I think the worst thing is that the system assumed their gender.
59
stefek99 1 day ago 0 replies      
Why doesn't it surprise me?
60
nom 2 days ago 1 reply      
Windows.. TeamViewer.. using the primary screen to display the ad.. and the camera is not even hidden..

Amateurs.

I wouldn't be alarmed by this; they probably don't even know the accuracy of the algorithm they are using or how to interpret the collected data correctly.

61
pfarnsworth 1 day ago 1 reply      
The subsequent Twitter thread featuring @justkelly_ok et al. is probably all the worst things about Twitter bundled up in one. It's a pure cringefest.
62
azm1 1 day ago 3 replies      
It's absurd that people are outraged about something like this, which is relatively harmless, and at the same time use Facebook. The social network has your face, all your life, moods, expressions, interests, personal conversations, etc. Now THAT is worrying, not some pizza shop which gathers stats to know what type of customer is their frequent visitor.
3
A federal court has denied a pre-trial motion to dismiss a GPL enforcement case qz.com
769 points by imanewsman  15 hours ago   192 comments top 27
1
DannyBee 14 hours ago 2 replies      
This happened a few weeks ago. But it's just a ruling on a preliminary injunction motion.

That is, it's not even a final decision of a court.

So while interesting, it's incredibly early in the process. The same court could issue a ruling going the exact opposite way after trial.

As someone else wrote, basically a court ruled that a plaintiff alleged enough facts that, if those facts were true, would give rise to an enforceable contract.

IE they held that someone wrote enough crap down that if the crap is true the other guy may have a problem.

They didn't actually determine whether any of the crap is true or not.

(In a motion to dismiss, the plaintiff's allegations are all taken as true. This is essentially a motion that says "even if everything the plaintiff says is right, I should still win". If you look, this is why the court specifically mentions that a bunch of the arguments the defendant makes would be more appropriate for summary judgment.)

2
apo 15 hours ago 5 replies      
To use Ghostscript for free, Hancom would have to adhere to its open-source license, the GNU General Public License (GPL). The GNU GPL requires that when you use GPL-licensed software to make some other software, the resulting software also has to be open-sourced with the same license if it's released to the public. That means Hancom would have to open-source its entire suite of apps.

Alternatively, Hancom could pay Artifex a licensing fee. Artifex allows developers of commercial or otherwise closed-source software to forego the strict open-source terms of the GNU GPL if they're willing to pay for it.

This obligation has been termed "reciprocity," and it lies at the heart of many open source business models.

http://www.rosenlaw.com/pdf-files/Rosen_Ch06.pdf

The more important issue here is reciprocity, not whether an open source license should be considered to be a contract.

AFAIK, the reciprocity provision of any version of the GPL hasn't been tested in any meaningful way within the US. In particular, the specific use cases that trigger reciprocity remain cloudy at best in my mind.

Some companies claim that merely linking to a GPLed library is sufficient to trigger reciprocity. FSF published the LGPL specifically to address this point.

So I believe a ruling on reciprocity would be ground breaking.

3
rlpb 15 hours ago 3 replies      
"Corley denied the motion, and in doing so, set the precedent that licenses like the GNU GPL can be treated like legal contracts, and developers can legitimately sue when those contracts are breached."

The GNU GPL was written on the basis that if someone does not accept its terms, then that without any other license from the copyright holder, redistribution puts that person in violation of copyright law.

Suing for damages on the basis of a breach of copyright law clearly does not require any contract.

So this is more about a technicality of the legal process in this particular case, rather than anything about whether copyleft is legally enforceable or not in general.

Specifically, because the motion denial was based on the defendant's own admission being deemed to be the agreement of a contract, this says nothing about the general enforceability of the GPL (future defendants could simply avoid making such an admission).

Further, since the ruling was in response to a specific motion, it only concerns the claims made in that motion: about whether a contract exists in this particular case. It says nothing about the "copyright violation if you don't accept the license" mechanism of copyleft.

Finally, the article does not provide any evidence that there has been any ruling that determined that the GPL is an enforceable legal contract, contrary to its title. The ruling as quoted just says that the defendant, by its own admission, did accept to enter in to the GPL-defined contract.

4
beat 13 hours ago 1 reply      
A friend of mine, who is a software engineer turned IP lawyer, made a good point about the GPL - the reason it "has never been challenged in court" isn't about uncertainty, but about certainty. The GPL is based on the most simple, bedrock copyright law. Despite being a clever hack, there's nothing legally exotic about it.

Any judge in the country or anywhere else would laugh a GPL challenge right out of court. And any IP lawyer reading it would tell their client that that's what's going to happen if they try to challenge it. That's why it's never been fully tested in court... no need.

5
ckastner 15 hours ago 3 replies      
> That happened when Hancom issued a motion to dismiss the case on the grounds that the company didnt sign anything, so the license wasnt a real contract.

... so they admitted to the court that they willfully used the software without a license to do so?

6
dhimes 15 hours ago 2 replies      
This was a ruling that the contract between the plaintiff and defendant existed, not on the validity of the contract (which is the GNU GPL license).

Defendant (Hancom) was trying to say that because they didn't sign anything they didn't have a contract.

But Hancom "represented publicly that its use of Ghostscript was licensed under the GNL GPU"

Therefore, the Judge ruled that in their own words they publicly acknowledged the contract.

7
AsyncAwait 15 hours ago 0 replies      
This is great. Love or hate the GPL, it brings something unique to the table that no other license does, and developers should have the ability to license their software under the terms that best fit their motivation for developing it in the first place; the GPL does exactly that for many.
8
blauditore 14 hours ago 4 replies      
One thing I often wonder is how a company providing such open source software can find out (and prove) that someone is using it in a closed-source project. All I can think of is "guessing" based on the behavior of the downstream tool.
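(Though I suppose one low-tech approach is that libraries tend to embed telltale strings, like version banners and copyright lines, so scanning a shipped binary for them is often enough to justify a closer look. A Python sketch; the byte strings here are illustrative guesses, not the literal banners Ghostscript embeds:)

    def telltale_hits(path):
        telltales = [b"Ghostscript", b"Artifex Software"]  # illustrative
        with open(path, "rb") as f:
            blob = f.read()
        return [t.decode() for t in telltales if t in blob]

    print(telltale_hits("suspect_app.exe"))  # hypothetical file name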

Also, the article doesn't say much about how that lawsuit came to be. Did Artifex approach Hancom beforehand to notify them about the license infringement or just directly sue? I guess in this particular case, Hancom knew what they were doing, but I can imagine some (smaller) companies not being fully aware of open source license specifics and unknowingly running into a lawsuit.

9
pvdebbe 15 hours ago 1 reply      
Excellent news.
10
AndyMcConachie 12 hours ago 0 replies      
Here is a link to the actual opinion if anyone else is interested.

https://scholar.google.com/scholar_case?case=377952933561079...

11
dragonwriter 15 hours ago 1 reply      
Actually, a more accurate statement is that a federal judge has ruled that a plaintiff in a case has alleged the existence of circumstances in which the GPL would be an enforceable legal contract.
12
carlmcqueen 15 hours ago 0 replies      
While an important step, the last line of the article makes it clear it is still pretty early in this process.
13
faragon 13 hours ago 1 reply      
In my opinion, software equivalent in functionality to Ghostscript should be written using a BSD or similar license. Is there anyone willing to sponsor it?
14
analog31 13 hours ago 1 reply      
That means Hancom would have to open-source its entire suite of apps.

Ask HN: What if the vendor had structured their product in such a way that Ghostscript was its own stand-alone app? Would they still be obligated to release their entire code, or just the portion that uses Ghostscript?

15
iamNumber4 13 hours ago 0 replies      
Moral of the story: know your licenses. Adhere to the license terms. Seek out projects with more permissive licenses if you plan to do closed source.

It is simple to work around license issues with your project; you just have to put in the work. Know that your design may have to factor in extra time because you can't use lib XYZ and have to write your own library to do the same thing. If using lib XYZ will save a bunch of time, then know that you will have to adhere to lib XYZ's license. Maybe writing a wrapper application that you open-source, and that your closed-source application interfaces with, might be a design consideration.

In the end, it's your project, your call. Just know that when you make a decision you weigh the pros and cons of going forth with that decision.

16
danschumann 13 hours ago 3 replies      
What happens if they claim they downloaded it from somewhere else that didn't include the license.txt file? There is no proof they ever were even notified of the license. (this is why we usually have people sign contracts)
17
georgestephanis 7 hours ago 0 replies      
Doesn't MySQL distribute in a similar dual-licensed fashion?
18
davidgerard 15 hours ago 0 replies      
The GPL has been upheld many times previously, e.g. in BusyBox enforcing its copyright.

https://en.wikipedia.org/wiki/BusyBox#GPL_lawsuits

In one enforcement, the defendant defaulted and the SFLC ended up with a pile of violating televisions!

http://www.groklaw.net/article.php?story=20100803132055210

The enforceability of the GPL is in no way news. That anyone would continue to try to violate it is the real WTF.

19
ljfio 14 hours ago 1 reply      
This article seems to be declaring victory in a war, when really only a minor battle in the war has been won.
20
iplaw 15 hours ago 0 replies      
> Of course, whether Artifex will actually win the case it's now allowed to pursue is another question altogether.

It's fairly clear that they will win the case in one fashion or another. I am predicting that the case will quickly be settled out of court for a lump sum plus a running licensing fee. You have a public admission from the defendant that they integrated the plaintiff's Ghostscript software into their own without either: 1) making the resulting Hancom office suite open source, or 2) paying Artifex a licensing fee for the software.

The case against Hancom was solid under copyright infringement, and now has the added sting of breach of contract.

21
siegel 12 hours ago 0 replies      
The article somewhat overstates the significance of this case in terms of precedential value.

On a procedural level, understand that this is a district court opinion and is not binding on any other court. Of course, if other courts find the arguments persuasive, they can adopt the reasoning. But no court has to adopt the reasoning in this opinion.

On a substantive level, it's important to look at the arguments the court is addressing and how they are addressed:

1) Did the plaintiff adequately allege a breach of contract claim?

We're at the motion to dismiss phase here and the court is only looking at plaintiff's complaint and accepting all of the allegations as true.

There are essentially only 2 arguments the court addresses: A) Was there a contract here at all?; and B) Did the plaintiff adequately allege a recognizable harm?

Understand that in a complaint for breach of contract, a plaintiff has to allege certain things: (i) the existence of a contract; (ii) plaintiff performed or was excused from performance; (iii) defendant's breach; (iv) damages. So, the court is addressing (i) and (iv), which I refer to as (A) and (B) above.

As to (A), the argument the defendant appears to have made is that an open source license is not enforceable because of a lack of "mutual assent." In other words, like a EULA or shrink-wrap license, some argue that merely using software subject to an open source license doesn't demonstrate that you agreed to the terms of that license.

The court, without any real analysis, says that by alleging the existence of an open source license and using the source code, that is sufficient to allege the existence of a contract. The court cites as precedent that alleging the existence of a shrink-wrap license has been held as sufficient to allege the existence of a contract.

But the key word here is "allege." As the case proceeds, the defendant is free to develop evidence to show that there was no agreement between the parties as to the terms of a license. So, very little definitive was actually decided at this stage. All that was decided is that alleging that an open source license existed is not legally deficient per se to allege the existence of a contract.

As to (B), defendant apparently argued that plaintiff suffered no recognizable harm from defendant's actions. The court held that defendant deprived plaintiff of commercial license fees.

In addition, and more important for the audience here, the court held that there is a recognizable harm based on defendant's failure to comply with the open source requirements of the GPL license. Basically, the court says that there are recognizable benefits (including economic benefits) that come from the creation and distribution of public source code, wholly apart from license fees.

This is key - if the plaintiff did not have a paid commercial licensing program, it could STILL sue for breach of contract because of this second type of harm.

That being said, none of this argument is new. There is established precedent on this point.

2) Is the breach of contract claim preempted?

Copyright law in the United States is federal law. Breach of contract is state law. A plaintiff cannot use a state law claim to enforce rights duplicative of those protected by federal copyright law.

So, what the court is looking at here, is whether there is some extra right that the breach of contract claim addresses that is not provided under copyright law.

In other words, if the only thing that the breach of contract claim was addressing the right to publish or create derivative works, then it would be duplicative of the copyright claim. And, therefore, it would be preempted.

Here, the court held that there are two rights that the breach of contract claim addresses that are different from what copyright law protects: (A) the requirement to open source; and (B) compensation for "extraterritorial" infringement.

The real key here is (A), not (B). With respect to (A), the court here is saying that the GNU GPL's copyleft provisions that defendant allegedly breached are an extra right that is being enforced through the breach of contract claim that are not protected under copyright law. Therefore, the contract claim is not preempted.

(B) is a bit less significant for broader application. What (B) is saying is that because the plaintiff is suing for defendant's infringement outside the U.S. ("extraterritorial" infringement), and federal copyright law doesn't necessarily address such infringement, that's an "extra element" of the breach of contract claim. I say this is less significant because it wouldn't apply to a defendant who didn't infringe outside the United States. So, if you were the plaintiff here and the defendant was in California and only distributed the software in the U.S., argument (B) wouldn't apply.

I hope this clarifies what is/is not significant about the opinion here.

22
cmdrfred 15 hours ago 1 reply      
I wonder if this applies to non-copyleft licenses as well.
23
etskinner 15 hours ago 0 replies      
"GNL GPU", must be Nvidia's new line of graphics cards.
24
frabbit 12 hours ago 1 reply      
This is why, if someone were the (usually) imaginary "Free Software zealot" who would like to prevent a private business from profiting off public work, it would be necessary not only for the software to be under a Free license, but for the copyright assignment to be held by someone who agrees with said Free Software "zealot".
25
finid 13 hours ago 0 replies      
That happened when Hancom issued a motion to dismiss the case on the grounds that the company didn't sign anything, so the license wasn't a real contract.

Hancom's CEO is a thief.

26
MichaelMoser123 13 hours ago 0 replies      
Congratulations to Stallman. After all these years the GPL has been tested in court. The man must be drunk with joy... Three cheers for Mr. Stallman and his gcc (joining in on his celebrations).
27
brian-armstrong 11 hours ago 0 replies      
The GPL has such strong terms, I think there is good reason to avoid ever reading any GPL codebase. Tainting yourself may imperil any code you write for the rest of your lifetime. And to that end, I think GitHub should place a large warning on any GPL repo before letting you see it, as well as delisting them from search results (or at least hiding the contents).
4
Cyberattacks in 12 Nations Said to Use Leaked N.S.A. Hacking Tool nytimes.com
756 points by ghosh  9 hours ago   361 comments top 54
1
ComodoHacker 8 hours ago 4 replies      
Edit: Botnet stats and spread (switch to 24H to see full picture): https://intel.malwaretech.com/botnet/wcrypt

Live map: https://intel.malwaretech.com/WannaCrypt.html

Relevant MS security bulletin: https://technet.microsoft.com/en-us/library/security/ms17-01...

Edit: Analysis from Kaspersky Lab: https://securelist.com/blog/incidents/78351/wannacry-ransomw...

2
RangerScience 9 hours ago 6 replies      
> "Microsoft rolled out a patch for the vulnerability last March, but hackers took advantage of the fact that vulnerable targets particularly hospitals had yet to update their systems."

> "The malware was circulated by email; targets were sent an encrypted, compressed file that, once loaded, allowed the ransomware to infiltrate its targets."

It sounds like the basic (?) security practices recommended by professionals - keep systems up-to-date, pay attention to whether an email is suspicious - would have protected your network. Of course, as @mhogomchungu points out in his comment - is this the sort of thing where only one weak link is needed?

Still. Maybe this will help the proponents of keeping government systems updated? And/or, maybe this will prompt companies like MS to roll out security-only updates, to make it easier for sysadmins to keep their systems up-to-date...?

(presumably, a reason why these systems weren't updated is due to functionality concerns with updates...?)

3
turnip123942 8 hours ago 9 replies      
I think this is an excellent example that we can all reference the next time someone says that governments should be allowed to have backdoors to encryption etc.

This shows that no agency is immune from leaks and when these tools fall into the wrong hands the results are truly catastrophic.

4
mhogomchungu 9 hours ago 3 replies      
I am in Tanzania (East Africa) and my father's computer is infected.

All he did to get infected was plug his laptop into the network at work (University of Dar es Salaam).

The laptop is next to me and my task this night is to try to remove this thing.

5
raesene6 9 hours ago 0 replies      
One of the big problems here will be for any country which makes a lot of use of older computers running Windows XP, as there is no patch for this vulnerability on that OS version.

How many systems that is is debatable, but by at least one benchmark (https://www.netmarketshare.com/operating-system-market-share...) we're looking at 7% of the desktop PC market that could be exposed with no patch available.

6
sasas 3 hours ago 0 replies      
MalwareTech needs recognition! By being the first to register the hard-coded domain in the malware, they have slowed the spread significantly...

https://twitter.com/josephfcox/status/863171107217563648
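For anyone curious how the kill switch works: reporting suggests the malware checks whether a hard-coded domain resolves and exits if it does, so registering the domain flipped the switch for everyone. A minimal Python sketch of that logic, using a made-up placeholder rather than the real domain:

    import socket

    # Hypothetical placeholder -- not the actual hard-coded domain.
    KILL_SWITCH_DOMAIN = "example-killswitch-placeholder.test"

    def kill_switch_active(domain):
        """True if the domain resolves, i.e. someone has registered it."""
        try:
            socket.gethostbyname(domain)
            return True
        except socket.gaierror:
            return False

    if kill_switch_active(KILL_SWITCH_DOMAIN):
        # Reportedly the malware just stops here once the domain is live.
        raise SystemExit("kill switch tripped; exiting")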

7
natch 8 hours ago 1 reply      
This gives the lie to the notion that a government master key or back door scheme could be protected from leaks and abuse.
8
WheelsAtLarge 2 hours ago 0 replies      
Wow, the future is here and it's not looking very good. We need to diversify our OSes in the enterprise. This time it was MSFT; next it could be Linux. No OS gives an absolute guarantee. The systems are relatively dumb now; what will happen when AI has gotten deeper into our everyday lives? This is a wake-up call.
9
f2f 5 hours ago 0 replies      
Cisco's TALOS team just published an analysis:

http://blog.talosintelligence.com/2017/05/wannacry.html

10
Asdfbla 9 hours ago 1 reply      
One of the side effects of states participating in the proliferation of offensive tools. It won't be the last time state-sponsored tools, exploits or backdoors fall into the hands of interested third parties.

I think collateral damage like that is way underrated by politicians all around the globe that call for their respective intelligence agencies to build up offensive capabilities to be able to conduct cyber warfare and whatnot.

11
jayess 8 hours ago 1 reply      
You can keep an eye on their bitcoin wallet (or at least one of them): https://blockchain.info/address/13AM4VW2dhxYgXeQepoHkHSQuy6N...
12
placeybordeaux 7 hours ago 2 replies      
Going through their wallets it looks like they've gotten 32 payouts, some for more than 300 USD. Are there any addresses that they are using outside of the four listed in the article?

It'd be an interesting project to try and track where these funds go and where they came from.

https://blockchain.info/address/13AM4VW2dhxYgXeQepoHkHSQuy6N... - 11
https://blockchain.info/address/115p7UMMngoj1pMvkpHijcRdfJNX... - 4
https://blockchain.info/address/12t9YDPgwueZ9NyMgw519p7AA8is... - 6
https://blockchain.info/address/1QAc9S5EmycqjzzWDc1yiWzr9jJL... - 11
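If anyone wants to track the totals themselves, here's a minimal sketch against blockchain.info's public JSON endpoint (assuming the rawaddr endpoint is still available; paste in the full addresses, since the ones above are truncated):

    import json
    import urllib.request

    # Fill in the full base58 addresses -- the ones quoted above are
    # truncated, so placeholders are used instead of guessing at them.
    ADDRESSES = [
        "FULL_ADDRESS_1",
        "FULL_ADDRESS_2",
    ]

    def total_received_btc(address):
        """Lifetime satoshis received by the address, converted to BTC."""
        url = "https://blockchain.info/rawaddr/" + address
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        return data["total_received"] / 1e8

    for addr in ADDRESSES:
        print(addr, total_received_btc(addr), "BTC")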

13
sasas 3 hours ago 0 replies      
Here is a link to the malware sample and technical implementation details.

https://gist.github.com/rain-1/989428fa5504f378b993ee6efbc0b...

14
remarkEon 5 hours ago 0 replies      
If I want a deep technical analysis of what we know so far, where do I go?
15
Keverw 9 hours ago 5 replies      
Wow, this is so insane. I really don't think the NSA should be finding vulnerabilities and keeping them to themselves.

I mean, I get that it is all to help stop the bad guys, but if you are keeping cyber weapons like this, you should be required to keep them as secure and locked down as possible if you don't follow responsible disclosure.

Just like how a cop would keep their weapon on them, instead of setting it down on the table while eating lunch.

16
nyolfen 9 hours ago 2 replies      
BBC says up to 74 nations now: http://www.bbc.com/news/live/39901370
17
olliej 9 hours ago 1 reply      
Cyber attacks use a patched exploit to attack systems running out-of-date software, even in large enterprises handling sensitive data?

I give a pass to individuals (bandwidth for updates can be expensive, regular users don't know about Patch Tuesday, etc.), but an enterprise-scale deployment should have IT for this, and IT should have been well aware of this kind of thing happening.

18
nyolfen 9 hours ago 0 replies      
We really are living in the future. My condolences to the NHS, but what a time to be alive.
19
JackFr 8 hours ago 0 replies      
As far as I can see it hasn't moved the needle on Bitcoin/$ today though.

Ransomware was a play for big Bitcoin holders to unwind large positions at the highs without too much downward pressure on the Bitcoin market.

20
blackflame7000 6 hours ago 0 replies      
I was debugging a private web app today when I noticed a python script agent suddenly performing a port scan on me. It was querying for something called "a2billing/common/javascript/misc.js". After googling that phrase it seems I'm not the only person who has seen this today. The country of origin of the IP was Britain.

After further investigation, it appears this attack could be related to this: http://www.cvedetails.com/cve/CVE-2015-1875/
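If you want to check whether the same probe hit your own servers, a quick sketch (the log path is an assumption; point it at whatever your web server actually writes):

    from pathlib import Path

    PROBE = "a2billing/common/javascript/misc.js"
    LOG_PATH = Path("/var/log/nginx/access.log")  # adjust for your setup

    # Print every request line containing the probe string so the source
    # IPs and timestamps can be pulled out for blocking/reporting.
    with LOG_PATH.open(errors="replace") as log:
        for line in log:
            if PROBE in line:
                print(line.rstrip())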

21
rileytg 46 minutes ago 0 replies      
Is this supporting evidence of the US doing something "wrong" by creating these tools?

Disclaimer: I hope not, b/c it's like any other military tech being leaked and used, but I'm not sold either way.

22
c3534l 8 hours ago 0 replies      
It could also just be the NSA banking on everyone assuming it's someone using NSA tools.
24
drinchev 6 hours ago 1 reply      
So if I pay, how do the hackers decrypt my HD? Is there a way to sniff the key and pay once - decrypt everywhere?
25
print_r 7 hours ago 1 reply      
While I can understand WikiLeaks' position, I feel like it was incredibly short-sighted and uninformed of them to release the code itself. Unless you believe that they are working with the Russian (and other?) governments to destabilize the West. Personally, I wouldn't be surprised if this were the case.
26
runesoerensen 3 hours ago 0 replies      
DHS Statement on Ongoing Ransomware Attacks: https://www.dhs.gov/news/2017/05/12/dhs-statement-ongoing-ra...
27
arca_vorago 7 hours ago 1 reply      
First of all, while I of all people love to pile onto the anti-NSA bandwagon (within constitutional reason that is, I don't advocate their abolishment, but that's a different conversation), there are quite a few non-three-letter related things that have contributed to this story and ones like it.

The primary issue at the heart of things like this, beyond the backdoors and 0-days, is this: bad IT.

That being said though, bad IT is far too often the fault of upper management, and not the IT people themselves. After years of sysadmining, I've seen the inside of hundreds of companies, from Fortune 500 oil to medium-sized law firms. You know what they have all been doing over the years? Cutting costs by cutting IT. Except... they completely fail to consider long-term consequences, which end up costing more.

I blame things like this on two main groups: boards of directors and company executives. Far too often I ran into a situation where a company didn't even have a CIO or a CTO, and you had some senior one-man miracle show drowning in technical debt, reporting to a CEO or CFO and getting nowhere, and therefore getting no support, no budget, no personnel, etc. I've seen exceptions too, but they are far too rare. If it's not technical debt that's drowning the company, it tends to be politics. The bottom line is forward-thinking IT personnel don't get heard, and inevitably companies hire people or an MSP with all the proprietary Cisco, Microsoft, Oracle, etc. bullshit certs that make the C's feel better, but don't actually produce the wanted results. They inevitably end up providing an inferior product with inferior service at a short-term cost just as high as doing it right the first time, and a much higher long-term cost.

If I could say one thing that could help prevent issues like this, besides my standard whinging on about FOSS and the four freedoms and such, it is that we need better CTOs and CIOs to advocate on behalf of IT departments, and I think senior sysadmins who feel they have hit a ceiling should consider going for their MBAs and transitioning to those titles.

Now, onto the NSA angle of the story. Well... all I can say is I told ya so, with an extra note that HN in the past few years has been surprisingly dismissive of FOSS proponents who have been warning about these things.

First they made fun of us for saying everything was being spied on, and then Snowden happened (often followed by bullshit like "are you surprised?" or "what do you have to hide?").

Then we warned about proprietary systems, and then the NSA/CIA tool leaks happened (often followed by things like "but it's for foreign collection only" and "but the NSA contributes to SELinux").

Y'all aren't listening until after the fact, and that's not going to fix anything.

28
Irreal 6 hours ago 0 replies      
Is it possible to cause havoc on banks worldwide?
29
soneca 8 hours ago 2 replies      
"Microsoft rolled out a patch for the vulnerability last March, but hackers took advantage of the fact that vulnerable targets particularly hospitals had yet to update their systems."

Which Microsoft software should be updated now to protect against this particular attack? Windows? Windows on the end-user machines? The servers?

Could someone share a "What should I do now to protect myself" guide, please?

Thanks!

30
campuscodi 9 hours ago 0 replies      
It's not 12 nations.... it's all over the world...
31
itissid 6 hours ago 0 replies      
Does anyone have a running list of the organizations affected so far?
32
cryogenspirit 7 hours ago 3 replies      
Q: does anyone know how to disable regular internet access in Windows except through a virtual machine (VMware or VirtualBox)?

I have set up my mom to use a live Debian CD through VMware, but I would also like to disable networking through Windows Edge and Explorer. I don't know how to do this, however.

Myself, I follow a similar scheme but using a Linux virtual guest and host. Is it easy to disable all networking except for apt/yum and vmware/kvm?

Lastly, does anyone know what it costs for a personal subscription to grsecurity?

33
reviewmon 3 hours ago 0 replies      
Anticipation of an attack tied to an all-time-high bitcoin?
34
TomK32 8 hours ago 2 replies      
So... I'm running Linux on all my systems, how bad will it be for me?
35
gazos 5 hours ago 0 replies      
I'm hearing the password wncry@20l7 decrypts the zip within the PE resources. Can anyone confirm?
36
djanklow 5 hours ago 0 replies      
Why don't telecom providers help remove devices that are making an exorbitant number of requests? Wouldn't this kill botnets, if the exponential growth effect became impossible?
37
jordan314 8 hours ago 3 replies      
Can't law enforcement follow the transactions of the public address of the ransom bitcoin wallet until the bitcoin is sold?
38
mschuster91 6 hours ago 0 replies      
Apparently, this has spread to Deutsche Bahn...

1) a railway dispatcher just tweeted that IT systems will be shut down (https://twitter.com/lokfuehrer_tim/status/863139642488614912)

2) a journalist tweeted that an information display of DB fell victim to ransomware (https://twitter.com/Nick_Lange_/status/863132237822394369).

I guess that #1 and #2 are related, though.

39
Myrmornis 1 hour ago 0 replies      
> Security experts described the attacks as the digital equivalent of a perfect storm.

Just in case there are any journalists reading - never use the term "perfect storm".

40
rdiddly 2 hours ago 0 replies      
"Emergency rooms were forced to divert people seeking urgent care."

I feel like the words "urgent" and "forced" might both be a bit shy of absolutely true here?

41
mdkdog 6 hours ago 2 replies      
It looks to me like common stupidity... people opening attachments that they should not be opening. No need to involve CIA, NSA or other three-letter-agency hacking tools... just old-school phishing. I see this happening much too often... people opening *.pdf.js attachments. No need for another conspiracy theory... stupidity explains it all. Just my 50.
42
zyztem 8 hours ago 0 replies      
12 Nations that did not apply security patches
43
microcolonel 4 hours ago 0 replies      
> The attacks were reminiscent of the hack that took down dozens of websites last October, including Twitter, Spotify and PayPal, via devices connected to the internet, including printers and baby monitors.

Lazy writing at NYTimes; what on earth does this attack have to do with the one at hand? It's not broadly the same type of attack, nor the same scale, nor the same outcome.

44
gildas 7 hours ago 0 replies      
Q: could fuzzing techniques help to take down such (p2p) botnets?
45
Myrmornis 5 hours ago 2 replies      
There's no evidence that this attack targeted the NHS or other health systems, right? Just spreading randomly by email, with the highest infection probabilities on certain older Microsoft OSs?
46
anigbrowl 6 hours ago 0 replies      
I'm surprised by the lack of speculation on the identity of the perpetrators.
47
CCing 4 hours ago 1 reply      
Is OS X affected?
48
mtgx 6 hours ago 0 replies      
Is Russia being hit the most because it was the NSA that was exploiting this vulnerability before? Perhaps they are leveraging some other leaked NSA tool that gives them more direct access to Russian computers?
49
SomeStupidPoint 9 hours ago 0 replies      
This is what blowback looks like.

The US military and intelligence communities focused hard on cyber offense, rather than improving the defensive standards and technologies practiced among allies. Because of this, several allies have important systems compromised by (essentially) US-engineered malware.

Well, at least DARPA is sort of on it: http://archive.darpa.mil/cybergrandchallenge/

(There's also work stemming from the HoTT body of work on verified systems, as I understand it. But that doesn't have a sexy webpage.)

50
brilliantcode 7 hours ago 2 replies      
Isn't it peculiar that Russia remains the least hit, or not even hit at all? It seems like the West was a clear target. Connecting the dots here, it's safe to say Shadow Brokers serves Russian interests.

We are seeing bullet holes from what seems to have been cyber warfare between the former cold war foes.

51
anigbrowl 8 hours ago 0 replies      
I do not believe that attacks of this scale or coordination are undertaken by private actors. This is warfare; it just isn't kinetic yet.
52
lukaa 7 hours ago 1 reply      
Just use Linux and 90% of your malware problems are history. Your own kernel customization will make you even more secure.
53
jansho 9 hours ago 1 reply      
From the Guardian:

"He adds that the fear is that the ransonware cannot be broken and thus data and files infected are either lost or that the only way to get them back would be to pay the ransom, which would involve giving money to criminals."

The new terrorism.

https://www.theguardian.com/society/live/2017/may/12/england...

54
dberhane 9 hours ago 2 replies      
Maybe it is now time for a major review of the NHS's dependency on Microsoft software, and it should seriously consider switching to Linux-based software.

Here is the BBC news update about the NHS Cyber attack:

"NHS trusts 'ran outdated software'

Some who have followed the issue of NHS cyber security are sharing a report from the IT news site Silicon, which reported last December that NHS trusts had been running outdated Windows XP software.

The website says that Microsoft officially ended support for Windows XP back in April 2014, meaning it was no longer fixing vulnerabilities in the system - except for clients that paid for an extended support deal.

The UK government initially paid Microsoft £5.5 million to keep providing security support - but the website adds that this deal ended in May 2015."

5
Net neutrality is in jeopardy again mozilla.org
923 points by kungfudoi  4 days ago   372 comments top 44
1
shawnee_ 4 days ago 9 replies      
Net neutrality is fundamental to free speech. Without net neutrality, big companies could censor your voice and make it harder to speak up online. Net neutrality has been called the First Amendment of the Internet.

Not just harder. Infinitely more dangerous. Probably the scariest implications for NN being gutted are those around loss of anonymity through the Internet. ISPs who are allowed to sell users' browsing history, data packets, personal info with zero legal implications --> that anonymity suddenly comes with a price. And anything that comes with a price can be sold.

A reporter's sources must be able to remain anonymous in many instances where the release of information about corruption creates political instability, endangers the reporter, endangers the source, or keeps the truth from being revealed. These "rollbacks" of regulations make it orders of magnitude easier for any entity in a corporation or organization to track down people who attempt to expose their illegal actions / skirting of laws. Corporations have every incentive to suppress information that hurts their stock price. Corrupt local officials and governments have every incentive to suppress individuals who threaten their "job security". Corrupt PACs have every incentive to drown out that one tiny voice that speaks the truth.

A government that endorses suppression cannot promote safety, stability, or prosperity of its people.

EDIT: Yes, I am also referring to the loss of Broadband Privacy rules as they have implications in the rollback of net neutrality: https://www.theverge.com/2017/3/29/15100620/congress-fcc-isp...

Loss of broadband privacy: Yes, your data can and will be sold

Loss of net neutrality: How much of it and for how much?

2
guelo 4 days ago 1 reply      
It's insane the number of comments on HN, of all places, that don't understand that the end of Net Neutrality is the end of the open web. People that didn't get a peek at CompuServe have no idea the fire we're playing with here. The open web is the most significant human achievement since the transistor, and we're about to kill it happily.
3
jfaucett 4 days ago 27 replies      
Does anyone else find the internet market odd? Up until now net neutrality and other policies have given us the following:

1. Massive monopolies which essentially control 95% of all tech (google, facebook, amazon, microsoft, apple, etc)

2. An internet where every consumer assumes everything should be free.

3. An internet where there's only enough room for a handful of players in each market globally, i.e. if you have a "project-management app" there will not be a successful one for each country, much less hundreds for each country.

4. Huge barriers to entry for any new player in many of the markets (no one can even begin competing with Google search for less than $20 million).

I think there's still a lot of potential to open up new markets with different policies that would make the internet a much better place for both consumers and entrepreneurs - especially the small guys. I'm just not 100% sure maintaining net-neutrality is the best way to help the little guy and bolster innovation. Anyone have any ideas how we could alleviate some of the above mentioned problems?

EDIT: another question :) If net-neutrality has absolutely nothing to do with the tech monopolies maintaining their power position then why do they all support it? [https://internetassociation.org/]

4
pbhowmic 4 days ago 3 replies      
I tried commenting on the proceeding at the FCC site but I keep getting service unavailable errors. The FCC site itself is up but conveniently we the public cannot comment on the issue.
5
bkeroack 4 days ago 9 replies      
I've written it before and I'll write it again (despite the massive downvotes from those who want to silence dissent): Title II regulation of the Internet is not the net neutrality panacea that many people think it is.

That is the same kind of heavy-handed regulation that gave us the sorry copper POTS network we are stuck with today. The free market is the solution, and must be defended against those who want European-style top-down national regulation of what has historically been the most free and vibrant area of economic growth the world has ever seen.

The reason the internet grew into what it is today during the 1990s was precisely because it was so free of regulation and governmental control. If the early attempts[1] to regulate the internet had succeeded, HN probably wouldn't exist and none of us would have jobs right now.

1. https://en.wikipedia.org/wiki/Communications_Decency_Act (just one example from memory--there were several federal attempts to censor and tax the Internet in the 1990s)

6
rosalinekarr 4 days ago 0 replies      
The propaganda Comcast is tweeting right now is absolutely ridiculous: https://twitter.com/comcast/status/859091480895410176

7
SkyMarshal 4 days ago 1 reply      
Looking at Comcast's webpage on this:

http://corporate.comcast.com/comcast-voices/comcast-supports...

They're arguing that Title II Classification is not the same as Net Neutrality, with the following statement:

"Title II is a source of authority to impose enforceable net neutrality rules. Title II is not net neutrality. Getting rid of Title II does not mean that we are repealing net neutrality protections for American consumers.

We want to be very clear: As Brian Roberts, our Chairman and CEO stated, and as Dave Watson, President and CEO of Comcast Cable writes in his blog post today, we have and will continue to support strong, legally enforceable net neutrality protections that ensure a free and Open Internet* for our customers, with consumers able to access any and all the lawful content they want at any time. Our business practices ensure these protections for our customers and will continue to do so."*

So if Title II goes away, where do those strong, legally enforceable net neutrality protections come from? Wasn't that the reasoning behind Title II in the first place, it's the only effectively strong, legally enforceable way of protecting net neutrality (vs other methods with loopholes)?

8
kewpiedoll99 10 hours ago 0 replies      
Last Week Tonight just did a piece about it. They described the circuitous, tortuous route the FCC site put into place to hinder consumer feedback. LWT announced they have purchased the domain >>> gofccyourself.com <<< to take you right to the FCC page you need to get to. Go to gofccyourself.com and comment. Long form is better. Well written is better. Well reasoned is better. But any comments are (one hopes) better than nothing. Get involved!
9
stinkytaco 4 days ago 6 replies      
Honest question, but is Net Neutrality the answer to these problems?

A few weeks ago on HN, someone made an analogy to water: someone filling their swimming pool should pay more for water than someone showering or cooking with it. This seems to make sense to me, water is a scarce resource and it should be prioritized.

Is the same true of the Internet? I absolutely agree that ISPs that are also in the entertainment business shouldn't be allowed to prioritize their own data, but that seems to me an anti-trust problem, not a net neutrality problem. I also agree that ISPs should be regulated like utilities, but even utilities are allowed to limit service to maintain their infrastructure (see: rolling blackouts).

Perhaps I simply do not understand NN, and perhaps organizations haven't done a good job of explaining it, but I wonder whether these problems are best solved by the FTC, not the FCC.

10
wehadfun 4 days ago 1 reply      
Trump's appointees disappoint me a lot. This guy, and the one over the EPA.
11
vog 4 days ago 1 reply      
It is really a pity that in the US, net neutrality was never established by law, but "just" at the institutional level.

Here in the EU, things are much slower, and the activists were somewhat envious of how fast net neutrality was established in the US, while in the EU this is a really slow legislative process. But now it seems that this slower way is at least more sustainable. We still don't have real net neutrality in the EU, but the achievements we have so far are more durable, and can't be overthrown that quickly.

12
Sami_Lehtinen 4 days ago 0 replies      
My Internet connection contract already says that they reserve the right to queue, prioritize and throttle traffic, which is used to optimize traffic. - Doesn't sound too neutral to me? It's also clearly stated that some traffic on the network gets absolute priority over secondary classes.

Interestingly, at one point a 100 Mbit/s connection wasn't nearly fast enough to play almost any content from YouTube. - Maybe there's some kind of relation, maybe not.
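For the curious, the "queue/prioritize/throttle" language usually boils down to traffic shaping along the lines of a token bucket: a flow can burst up to the bucket size, then is held to the refill rate. An illustrative Python sketch (not any ISP's actual implementation):

    import time

    class TokenBucket:
        """Shape traffic to a sustained rate with some burst headroom."""

        def __init__(self, rate_bytes_per_sec, burst_bytes):
            self.rate = rate_bytes_per_sec
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            elapsed = now - self.last
            # Refill tokens for the elapsed time, capped at the burst size.
            self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True
            return False  # a real shaper would queue, delay, or drop here

    # e.g. hold a flow to ~1 MB/s with a 256 KB burst allowance:
    bucket = TokenBucket(rate_bytes_per_sec=1_000_000, burst_bytes=256_000)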

13
alexanderdmitri 4 days ago 0 replies      
I think a great thing to do (if you are for net neutrality) is to pick specific parts of the NPRM filed with this proceeding and comment directly on them[1], to help do some work for the defense. I feel sorry for anyone who might actually need to address this document point for point to defend net neutrality.

I tried my hand at the general claim of regulatory uncertainty hurting business, then Paragraphs 45 and 47:

-> It is worth noting that by bringing this into the spotlight again, the NPRM is guilty of igniting the same regulatory uncertainty it repeatedly claims has hurt investment.

-> Paragraph 45 devotes 124 words (94% of the paragraph), gives 3 sources (75% of the references in this paragraph) and a number of figures (100% of explicitly hand-picked data) making the claim Title II regulation has suppressed investment. It then ends with 8 words and 1 reference vaguely stating "Other interested parties have come to different conclusions." Given the NPRM's insistence on both detail and clarity, this is absolutely unacceptable.

-> There are also a number of extremely misleading and unsubstantiated arguments throughout. Reference 114 in Paragraph 47, for example, is actually a haphazard mishmash of 3 references with clearly hand-picked data from somewhat disjointed sources and analyses. Then the next two references [115, 116] in the same paragraph point to letters sent to the FCC over 2 years ago from small ISP providers, before regulations were classified as Title II. Despite discussing the fears raised in these letters, the NPRM provides little data on whether these fears were actually borne out. In fact, one of the providers explicitly mentioned in reference 115, Cedar Falls Utilities, has not in any way been subject to these regulations (they have fewer than 100,000 customers... in fact the population of Cedar Falls isn't even 1/2 of the 100,000-customer exemption the FCC has provided!). This is obviously faked concern for the small ISP businesses and again, given the NPRM's insistence on both detail and clarity, this is absolutely unacceptable.

[1] makes a great point on specifically addressing what's being brought up in the NPRM: https://techcrunch.com/2017/04/27/how-to-comment-on-the-fccs...

14
smsm42 2 days ago 0 replies      
That article makes little sense to me. For example:

> Net neutrality is fundamental to free speech. Without net neutrality, big companies could censor your voice and make it harder to speak up online.

Big companies are censoring your voice right now! Facebook, Twitter, YouTube and literally every other big provider is censoring online speech all the time. If it's so scary, why does nobody care? If it's not, what is Mozilla trying to say here?

> Net neutrality is fundamental to competition. Without net neutrality, big Internet service providers can choose which services and content load quickly, and which move at a glacial pace.

The Internet has been around for a while, and nothing like that happened, even though we didn't have the current regulations in place until 2015, i.e. the last two years. At which point do we start asking for evidence and not just "they might do something evil"? Yes, there were shenanigans, and they were handled, way before the 2015 regulations were in place.

> Net neutrality is fundamental to innovation

Again, innovation has been going on for decades without current regulations. What happened that suddenly it started requiring them?

> Net neutrality is fundamental to user choice. Without net neutrality, ISPs could decide you've watched too many cat videos in one day,

ISPs never did it, as far as we know, for all history of ISP existence. Why would they suddenly start now - because they want to get abandoned by users and fined by regulators (which did fine ISPs way before 2015)?

> In 2015, nearly four million people urged the FCC to protect the health of the Internet

Before 2015, the Internet was doing fine for decades. What happened between 2015 and 2017 that now we desperately need this regulation and couldn't survive without it like we did until 2015?

15
MichaelBurge 4 days ago 1 reply      
> The order meant individuals were free to say, watch and make what they want online, without meddling or interference from Internet service providers.

> Without net neutrality, big companies could censor your voice and make it harder to speak up online.

Hmm, was it ever prohibited for e.g. some Twitter user to write your ISP an angry letter calling you a Nazi, so they shut your internet off to avoid the headache?

I've only heard about "net neutrality" in the context of bandwidth pricing. It's very different if companies are legally required to sell you internet (except maybe short of an actual crime).

16
sleepingeights 4 days ago 1 reply      
Many of these articles are missing an easily exploitable position. The key term is "bandwidth" which is the resource at stake. What is being fought over is how to define this "bandwidth" in a way that will be enforceable against the citizen and favorable to the corporation (i.e. "government").

One way they could do this is to divide it like they did the radio spectrum by way of frequency, where frequency is related to "bandwidth". The higher the frequency, the greater the bandwidth. With communication advances, the frequencies can be grouped just like they did with radio, where certain "frequencies" are reserved by the government/military, and others are monopolized by the corporations, and a tiny sliver is provided as a "public" service.

This way would be the most easily enforceable for them to attack NN and the first amendment, as it already exists in the form of radio.

* It is already being applied by cable providers through "downstream/upstream", where your participation by "uploading" your content is viewed as inferior to your consumption of it, i.e. your contribution (or upload) is a tiny fraction of your consumption (or download).

* Also, AWS, Google and other cloud services charge your VPS for "providing" content (egress) and charge you nothing for consuming (ingress). On that scale, the value of what you provide is so minuscule it is almost non-existent next to the value of what you consume.

tldr; NN is already partly destroyed.

17
laughingman2 4 days ago 2 replies      
I never thought Hacker News would have so many opposing net neutrality. Is some alt-right group brigading this thread? Which is ironic, considering how they claim to care about free speech.
18
FullMtlAlcoholc 4 days ago 1 reply      
Why does anyone want to give more power to Comcast or AT&T? Neither has ever been described as innovative... unless you count clueless members of Congress.
19
akhilcacharya 4 days ago 0 replies      
It's interesting how much could have changed if ~175k or fewer people in the Great Lakes region had voted differently..
20
justforFranz 4 days ago 0 replies      
Maybe we should give up and just let global capitalists own and run everything and jam web cameras up everyone's asshole.
21
notadoc 4 days ago 0 replies      
Is the internet a utility? And are internet providers a utility service? That's really what this is about.
22
pc2g4d 4 days ago 0 replies      
The top comments here seem to misunderstand net neutrality. It's not about companies selling your browsing history---that was recently approved by Congress in a separate bill[1]---but rather is about whether ISPs can prioritize the data of different sites or apps. IIUC net neutrality doesn't really provide any privacy protections, though it's likely good for privacy by making a more competitive market that motivates companies to act more (though not always) in consumers' interests.

1: https://arstechnica.com/information-technology/2017/03/how-i...

23
mbroshi 4 days ago 2 replies      
I am agnostic on net neutrality (i.e. neither for nor against, just admitting my own lack of ability to assess its fallout).

I read a lot of sweeping but hard-to-measure claims about its effects (such as in the linked article). Are there any concrete, measurable effects that anyone is willing to predict?

Examples might be:

* Average load times for the 1000 most popular webpages will decrease.

* There will be fewer internet startups over the next 5 years than the previous.

Edit: formatting

24
em3rgent0rdr 4 days ago 0 replies      
Title II "Net Neutrality" is a dangerous power grab -- a solution in search of a problem that doesn't exist, with the potential to become an engine of censorship (requiring ISPs to non-preferentially deliver "legal content" invites the FCC and other regulatory and legislative bodies to define some content as "illegal").

Title II "Net Neutrality" is also an instance of regulatory capture through which large consumers of bandwidth (such as Google and Netflix) hope to externalize the costs of network expansions to accommodate their ever-growing bandwidth demands. To put it differently, instead of building those costs into the prices their customers pay, they want to force Internet users who AREN'T their customers to subsidize their bandwidth demands.

25
tycho01 4 days ago 1 reply      
I'm curious: to what extent could a US ruling on this affect the rest of the world?
26
kristopolous 4 days ago 0 replies      
Freedom is never won, only temporarily secured.
27
boona 4 days ago 1 reply      
If the internet is fundamental to free speech, maybe it's not a good idea to give its freedom over to state control, and in particular to an agency that has historically gone beyond its original mandate and censored content.

When you hand over control to the government, don't ask yourself what it would look like if you were creating the laws, ask yourself what it'll look like when self-interested politicians create them.

28
billfor 4 days ago 0 replies      
I'm not sure putting the internet into the same class of service as a telephone made sense for all the unintended consequences. Everyone is fine until they wind up paying $50/month for their internet and then seeing another $15 in government fees added to their bill. From a pragmatic point of view, I'm sure the government will always have the option to regulate it later on.
29
WmyEE0UsWAwC2i 4 days ago 0 replies      
Net neutrality should be in the constitution, safe from lobbyists and the politicians in office.
30
arca_vorago 4 days ago 0 replies      
Someone tell me again why we don't have public internet backbone like we do roads?
31
twsted 4 days ago 0 replies      
It's sad that this article stayed in the first positions for so little time. And we are on HN.

But is this HN folks' fault?

At the time of my writing, "Kubernetes clusters for the hobbyist" - who thinks it is as important as this one? - with 470 fewer points and almost 300 fewer comments, both posted 6/7 hours ago, is six positions above.

32
weberc2 4 days ago 10 replies      
I wasn't impressed with this article; it reads like fear mongering. More importantly, I don't think the fix is regulation, I think it's better privacy tech + increased competition via elimination of local monopolies. Do we really want to depend on government to enforce privacy on the Internet?
33
c8g 4 days ago 1 reply      
> Net neutrality is fundamental to competition.

so, I won't get 20 times faster youtube. fuck that net neutrality.

34
1ba9115454 4 days ago 12 replies      
How much of this can just be fixed by the free market?

If I feel an ISP is limiting my choices wouldn't I just switch?

35
M2Ys4U 4 days ago 0 replies      
(In the US)
37
jameslk 4 days ago 4 replies      
I don't understand why you're getting downvoted. This is a valid question and shouldn't be downvoted so others can learn from the discussion.
38
jwilk 4 days ago 2 replies      
If you want people to take your article seriously, then maybe you shouldn't put a pointless animated GIF in it.
41
JustAnotherPat 4 days ago 0 replies      
November 8th was critical to the Internet's future, not today. People made their bed when they refused to get behind Clinton. Now you must accept the consequences.
42
albertTJames 4 days ago 0 replies      
Neutrality does not mean anything should be authorized... international law should allow ISPs to submit to judicial surveillance of individuals if those are suspected of serious crimes: terrorism, pedophilia, black-hat hacking, psychological operations/fake news. I don't think that because policemen can stop me in the street it is a violation of my freedom. Moreover, the article is extremely vague and uses argumentum ad populum to push its case while remaining quite unclear on what is really planned: "His goal is clear: to overturn the 2015 order and create an Internet that's more centralized."
43
bjt2n3904 4 days ago 2 replies      
So, which is it HackerNews? Are we OK with companies deciding what gets on the internet, or are we not? On one hand, we laud Facebook et al. for suppressing "fake news", and then we get upset when ISPs do the same.

Furthermore, the FCC has historically engaged in content regulation. Anyone wonder why there's no more cartoons on broadcast television? Or perhaps why the FCC is investigating Colbert's Trump Jokes? If we're so concerned about content freedom, the FCC is not the organization to trust.

44
vivekd 4 days ago 2 replies      
>The internet is not broken. There is no problem for the government to solve. - FCC Commissioner Ajit Pai

This is sooo true. If internet carriers were preferring some kind of content, or censoring or giving less bandwidth to certain content, or charging for certain content - and this was causing the problems described in the Mozilla article - then yes, we could have legislation to solve that problem.

What gets to me about the net neutrality movement is that the legislation they are pushing for is based on vague fears and panic. Caring about net neutrality has become some sort of weird silicon valley techno-virtue signaling.

If ISPs start behaving badly or restricting free speech, I would be happily on board to having legislation to address that. This has not happened and there is no evidence that there is any imminent threat of this happening. Net neutrality legislation is a solution to a vague non-existent speculative problem.

6
Solar Roof tesla.com
683 points by runesoerensen  2 days ago   437 comments top 59
1
blakesterz 2 days ago 17 replies      
Yikes. I just signed a contract for a new roof here last month; it's going to cost about $12k. Just did the estimate for the Tesla Solar Roof... $80,300, so $87k if I want the battery too. I can barely afford the $12k right now; the $80k is just so far over, it's not even close, even with how much I'd save over the years in electricity.

That being said, I love these things, so I'm hoping they get cheaper in the coming years.

2
mrtron 2 days ago 2 replies      
Not sure why all the negative energy.

They are going after the portion of the market that would replace their roof with a high end material, and are interested in solar.

If you are a home owner in this situation, you could consider investing into your home. The roof will pay dividends over the next 30 years, and is attractive and durable.

I think it will do extremely well. Perhaps the best opportunity is in new construction. Imagine having 50k more baked into your mortgage, but having your roof lower your ongoing energy costs! There's great potential in that market; they could also optimize the roof designs for power generation.

3
IvanK_net 2 days ago 15 replies      
I have always been a huge fan of a quick transition to sustainable energy sources. There is just one little thing I don't understand.

Why do they expect people to make electricity at their homes? You can buy a little piece of land in a desert, put solar panels there and distribute the electricity to other places. And you don't have to climb on any roof during the installation or the maintenance.

It is not profitable today in a free market to bake your own bread or to plant your own vegetables, because if it is done at a large scale by professionals, it can be made much cheaper while keeping the good quality. So I don't understand how home-made electricity could economically compete with the professional energy farms of the future.

4
SirLJ 2 days ago 4 replies      
Tesla acquired SolarCity in November in a deal worth $2.1 billion.

At the event, Musk said Tesla's roof would price competitively with normal roofs and could even cost less.

"It's looking quite promising that a solar roof will actually cost less than a normal roof before you even take the value of electricity into account," Musk said at the event. "So the basic proposition would be: Would you like a roof that looks better than a normal roof, lasts twice as long, costs less, and, by the way, generates electricity? It's like, why would you get anything else?"

5
11thEarlOfMar 2 days ago 3 replies      
In modeling whether this makes sense, I looked at my annual electricity bill, which comes in at about $1,800/year. That's not enough savings opportunity to justify a ~$70,000 roof+batteries.

However, when I add 2 electric cars, the savings nearly triple [0]. Instead of buying gasoline, I'll be paying for electricity.

At $5,400/year, spending $70,000 starts to make some sense.

On the other hand, if I put up ugly panels and still use the Tesla batteries, aren't I going to save a lot more?

[0] 24,000 miles/year, 225 miles @ $10 per charge, vs. 25 mpg @ $3/gal

EDIT: Corrected kWh charge... $10 is the cost for one charge.
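Rough sanity check of the math above in Python - every input (mileage, $/charge, roof price) is this comment's own assumption, not Tesla's numbers:

    miles_per_year = 24_000
    gas_cost = miles_per_year / 25 * 3.00       # 25 mpg @ $3/gal
    charge_cost = miles_per_year / 225 * 10.00  # 225 miles per $10 charge
    gas_savings_per_car = gas_cost - charge_cost

    electricity_bill = 1_800   # current annual bill
    annual_savings = electricity_bill + 2 * gas_savings_per_car
    roof_cost = 70_000

    print("annual savings: $%.0f" % annual_savings)                    # ~$5,427
    print("simple payback: %.1f years" % (roof_cost / annual_savings)) # ~12.9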

6
quizme2000 2 days ago 1 reply      
I think Elon got ripped off on his last shingle roof. The bar chart is nice, but off by at least 150%. I've had many roofing subcontractors as clients, past and present, in Northern California. Based on an average of 870 roofs in 2016 for single-family residential homes in the Bay Area, asphalt shingle roofs are $3.12 per square foot for materials and labor. The highest was $5.75 psf and the lowest $2.35 psf. Note that the SF Bay Area is considered one of the most expensive roofing markets. Also note that SolarCity has a poor reputation in the industry for hard-selling larger-than-needed residential solar systems.
7
fernly 2 days ago 2 replies      
For a counterweight let me present this interview[0] with the CEO of "the largest privately held solar contracting company in America", near the end of which he says several disparaging things about Tesla's roof, including,

> When I saw the demo he did at Universal Studios... What I saw was a piece of glass that looked like it had a cell in it. The challenge he's going to have is, how are you going to wire it? Every one of those shingles has to be wired.

> Roofs have valleys and they have hips and they have pipes. How are you going to work around that? How are you going to cut that glass? Are you going to cut right through the cell?

The latter question is perhaps answered by the posted article: "Solar Roof uses two types of tiles - solar and non-solar." So Petersen's question is moot; the glass/solar tiles don't have to be cut to fit in a hip or around a flue - that will be done to the non-solar tiles that look the same.

The question of wiring is open: imagine the grid of wires that has to underlie that roof, and getting them all put down without a break or a short, by big guys with nail guns (if you've ever watched roofers at work -- it isn't a precision operation).

Then Petersen goes on to say,

> So I would say for the record ... it'll be cost-prohibitive. ... For $55,000 I can give you a brand-new roof that will last forever - 50 years - and I can give you all the solar you can handle. ... (Musk's) product is going to be north of $100,000.

The graph in the posted article does not directly address total up-front installed cost, but rather tries to combine cost with some anticipated lifetime energy return -- a procedure with a LOT of variables and assumptions. I would like to see real numbers for a Tesla roof, $/sq.yd installed.

[0] http://www.mercurynews.com/2017/05/04/from-summer-job-to-sol...

8
fpgaminer 2 days ago 3 replies      
I'm so absolutely excited for solar power. Tesla's Solar Roof, their PowerWall batteries, electric cars. It's all just painting such a bright future. Certainly Tesla has no monopoly on it, but they've made it sexy and are pushing the bleeding edge forward. Props to them.

We recently signed a contract to do an installation on our house (with a local contractor, not SolarCity). It can't happen soon enough! We'll have enough panels and batteries to be 100% off-grid throughout the entire year, plus get a good chunk of change back from the Net Metering every year. Pay off is only 8 years!

That installation is enough to cover our normal electric usage. Longer term I want to replace our gas appliances with electric and replace the car with a Tesla. Then we can double our solar installation to keep pace and BAM we will be 100% clean energy and off-grid. All while saving a bucket of money.

The thought of running off grid in the middle of a Southern California suburb? People might think me crazy, but guess what? At least we're doing our small part to save the planet, and saving money doing it. So who's the crazy one?

9
zensavona 2 days ago 0 replies      
I understand where these people who are saying it sucks and it's too expensive are coming from. It is more expensive than normal solar panels.

BUT! How many wealthy people have beautiful houses that don't have solar panels? Why do you think that is?

Tesla has this cool factor that didn't exist for environmentally friendly things before. How many super rich people drove electric cars or hybrids before? Now Teslas are one of the cool things to have.

They are absolutely targeting a different segment of the population, but I think overall it's a very positive thing and it'll probably work.

10
SwellJoe 2 days ago 2 replies      
I'm super excited about all of the great stuff happening in solar recently, but whenever I read about the economics of home solar, I'm also always reminded of how stacked the deck is for wealthy people vs. poor folks. There's a very large federal tax credit for solar investment. That's great... but people who can't afford their own home get no such credit, and there's no way for them to get one. That's a super common trait for a lot of incentives; they go to the people who need them least. And the people who are getting these incentives are also using a lot more power (bigger houses, more power), and so even with solar, their huge houses may still be contributing more to emissions than the poor folks who aren't getting any tax breaks, living in apartments or rental properties.

I don't really have any answers on this, I just think it doesn't get talked about enough.

11
kartickv 1 day ago 0 replies      
Would it be possible to decouple the roof from the solar? In other words, I'd buy this roof for the cost of a normal roof, and an investor would pay for the cost of the solar panels, and they'd own the electricity that comes from it. The panels would provide me a power backup (in the event of a power cut), for which I'd pay at the same rate as I pay the grid.

This would be helpful for houseowners who want to be green, or want backup in the event of a power cut (a regular event in India), but don't want to or can't invest so much in a roof. They get power backup without making an upfront investment.

Investors would benefit from getting a free site to install their panels on.

12
palakchokshi 2 days ago 5 replies      
2 self driving Teslas in the garage (making money when not used) $150,000

1 power wall battery pack $7,000

1 Solar Roof $80,000

Subtract

$15,000 in Federal tax credits for both cars

$5,000 in California tax credits for both cars

30% of 80,000 = $24,000 Solar Investment tax credit

$237,000 - $44,000

Grand Total $193,000

Calculate savings

$240 per month in gas

$100-$300 per month in electricity

$1000 - $2000 earned by the cars while not used by owner (10 years into the future)

$1340 to $2540 per month

$193,000/1340 = 144 months = 12 years to recover costs

$193,000/2540 = 76 months = 6.33 years to recover costs

Take away the income from the cars

$193,000/540 = 357 months = 30 years to recover costs

If PG&E gives you money for putting excess electricity into the grid then you can recover costs faster.

13
nsxwolf 2 days ago 3 replies      
Oh wow. So that's disappointing. I was under the impression it was about the same cost as a new roof. I guess it's starting at that cost, if you want just a tiny little bit of electricity.
14
foxylad 2 days ago 1 reply      
Does anyone know how the electrical connection works?

It seems to me that this is critical. If connections fail in a really hostile environment (high thermal range and moisture levels) then maintenance will kill any savings.

But if they've solved this problem, (and perhaps have an efficient way to replace tiles without removing the ones above), then I'd guess they will be wildly successful.

I once visited my brother who was having a new slate roof installed. While inspecting it, he saw a cracked slate on the bottom row. He insisted it be replaced, which meant removing an ever-increasing triangle of tiles above it, until you reached the ridge. The contractor did not have a good day.

15
Matt3o12_ 2 days ago 4 replies      
Does anyone understand how the warranty works? Their solar panel page[0] says that there is a 30-year warranty for power and weatherization, and a lifetime warranty.

So, what does the lifetime warranty cover? The only things that can go wrong are that either the power module fails or the tile is damaged by weather, both of which are covered by the 30-year warranty.

Nevertheless, a 30 year warranty is still pretty impressive and even more so if it covers normal wear and tear from weather.

[0]: https://www.tesla.com/solarroof?redirect=no

16
DLarsen 2 days ago 0 replies      
"...the glass itself will come with a warranty for the lifetime of your house, or infinity, whichever comes first."

Cute.

17
Arcsech 2 days ago 4 replies      
I'm curious about the durability - I live in Colorado Springs, which is typically very sunny (good for solar), but can get pretty bad hailstorms. This means that the average roof lifetime here is much shorter than elsewhere. If Tesla's roof tiles are actually significantly more durable than asphalt, they could be more cost-effective here than elsewhere.
18
myrandomcomment 2 days ago 0 replies      
I need to replace my roof this year / next year. Cost ~15-20k for normal roofing, up to 50k for metal. I want solar on top of that and backup power. Just put my money down for this. Cost is under the 50k I was thinking about just for the metal roof!

Time will tell when they come out and do the survey to see how correct it is but I am excited.

19
kristinafoley 13 hours ago 0 replies      
The bigger problem with this is that they are being built to 2009 electrical code specifications, and we're at 2017. Will they really be available anywhere outside of California?

The calculator will not go over 50%. So then what? Still pay for their tiles for the whole roof?

And what if 4 in the middle of my roof are bad? Or 10? Or 4 this week and 10 next week? How many times do I have to have them out?

What about ALL the wires? Where do those all go? Each tile will have wires needing to connect to an inverter and the Powerwall. Electricians are already in high demand; how long will I have to wait for them to do this? Electricians are EXPENSIVE; do I have to pay for each hour they are repairing this system??

Powerwalls have a 10 year warranty. There is no cost for replacement included in their 30 year projections.

Does anyone believe this administration will continue the tax credit on solar??

Solar shingles have been done. Every company has already discontinued them. CGI works in advertising, but doesn't work in reality.

20
qaq 2 days ago 0 replies      
To all those upset about pricing: there are products targeting different income brackets. People in blah also can't understand how we spend half of their monthly income on some organic blah drink. Just because it does not make sense for your particular situation does not mean there is no market.
21
beezle 1 day ago 1 reply      
A bit late on this, but I have to wonder what fire departments think about this roof/panel. How difficult will it be to vent the roof? Will the roof present an electrical risk to firemen (even assuming there is a cutoff below)?

Unfortunately, I don't think a lot of municipalities have given too much thought to widespread solar use and I wonder if this will fall afoul of possible future regs.

22
bootload 2 days ago 0 replies      
"Solar Roof uses two types of tilessolar and non-solar. Looking at the roof from street level, the tiles look the same. Customers can select how many solar tiles they need based on their homes electricity consumption."

Game changer for suburban housing. This will accelerate the decentralisation of power generation, making it less likely that power failures will occur. Now for housing regulations at the state and municipal level to mandate solar tiles in construction.

23
altano 2 days ago 0 replies      
"Your Solar Roof can generate $123,900 of energy over 30 years."

Why doesn't the calculator tell me the estimated kWh production instead of a dollar figure that means nothing to me?
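You can back the energy out of the dollar figure if you assume a utility rate; the $0.18/kWh below is a guess, so substitute the rate from your own bill:

    lifetime_value = 123_900   # dollars, from the calculator
    assumed_rate = 0.18        # $/kWh -- an assumption, check your bill
    years = 30

    lifetime_kwh = lifetime_value / assumed_rate
    print("%.0f kWh over %d years (~%.0f kWh/day)"
          % (lifetime_kwh, years, lifetime_kwh / (years * 365)))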

24
ed_balls 2 days ago 1 reply      
Apart from people buying these for status, there is another market - Hawaii and other remote places where prices are 2x to 5x compared to California.
25
awqrre 2 days ago 1 reply      
At my house, it would take more than 20 years to use up $53,500 worth of electricity, assuming the panels would be able to generate all the electricity that I need (and they probably would not, because my roof is not at the perfect angle). I'll probably have to stick with a conventional roof.
26
dynofuz 2 days ago 1 reply      
The hail ball test is deceptive. The Tesla tile is held with more support since it's horizontal; the max distance to any corner support is maybe 2-3 inches. The other, natural tiles are vertical, and therefore have 4-5 inches to the farthest supported corners. It may still work, but we can't tell from that video.
27
dustinmoorenet 2 days ago 1 reply      
With SpaceX internet on the horizon, I need to start designing my house in the country, preferably with a small roof.
28
accountyaccount 2 days ago 0 replies      
Looks like these roofs take about 30 years to pay for themselves?
29
pfarnsworth 2 days ago 0 replies      
I am currently waiting to have my roof redone in the next couple of weeks. It's going to cost $20k. I went through the calculator and it said that my roof would be about $30k after rebates, with no battery. That, to be honest, is something I wish I had known before I signed the contract to get my roof done. I don't, however, use much electricity. I use about $70/month max for my entire house, so I would literally have to convert everything over to electricity in order for this to be more worthwhile. But at this point, there's no incentive for me to ever get the solar roof unfortunately, having JUST dumped $20k into my shingle roof.
30
brianbreslin 2 days ago 1 reply      
Any idea on whether or not this would improve your home's resale value? Also it won't let you go a full 100% (max 70%) of your roof coverage. Do they fill the rest with regular tiles?

The average person would have to finance this. So what's the true cost?

31
bikamonki 2 days ago 0 replies      
Sit down and dig this: in my country the State owns sunshine. Yep. They even made sure it was included in the last Constitution. So, if this tech ever becomes cheap enough for the masses, government will be ready to tax it.
32
gumby 2 days ago 0 replies      
FYI, LCoE calculations on Si PV assume a reduction of at least 50% of generating capacity within 20 years. This page just claims "30 years", which is outside the expected lifetime of any cells on the market today.
33
OrthoMetaPara 2 days ago 1 reply      
This is the dumbest thing ever. If you live in a city or a suburb, you don't need one of these things because you'll be connected to a grid that can give you electricity that is far more efficiently generated. If you live in a rural area with a lot of sun, then you can just put solar panels on the ground where they're not a bitch to clean.

I'm not against using solar electricity because it can be made affordable but this idea is equivalent to the backyard blast furnaces in Maoist China. It's a waste of time and only useful for status signaling to your eco-chic friends.

34
jorblumesea 2 days ago 0 replies      
While it's far better than other competitors', my asphalt roof was 7k with a 30-year warranty, rated to 120mph winds. Still a long way to go on the pricing part.

Super happy this is even a thing, 10 years ago this would have seemed like science fiction.

35
puranjay 1 day ago 1 reply      
This is one of my pet peeves with HN.

People here often gang up on a product if it doesn't meet their specific requirements or budget.

Could it simply be that you aren't the target market for this product?

Also, people comparing the cost of this roof with the cost of a conventional roof are being a little disingenuous. This roof is ROOF + SOLAR.

You'll have to include the cost of an equivalent solar system + roof if you want to do a cost-to-cost comparison.

It's obviously not cheap, but if there are people - and there are plenty of people - who can afford it, why knock it down?

36
rurabe 2 days ago 1 reply      
Best part

"Tile warranty: Infinity, or the lifetime of your house, whichever comes first"

37
nextlevelshit 1 day ago 0 replies      
Does anybody have information about the energy cost of producing one tile, and the time it takes the solar cell to generate that energy back?

Just the fact that it is a solar roof doesn't mean it is environmentally friendly!

38
mrfusion 2 days ago 1 reply      
How do these shingles connect to each other? And how are they affixed to the roof? You can't just nail them, right?
39
MR4D 2 days ago 0 replies      
Tesla has an error somewhere. I checked their website calculator vs. their source, Google Project Sunroof, and Tesla thinks my roof is 5 times bigger, with an electric bill nearly twice as high!

Doing some quick math, I can confirm that Google's numbers are reasonably close on both, while Tesla's are just plain wrong.

The result is a Tesla roof that would cost roughly $170,000. Worse, that's about half the value of my house!

I know - early days - but, wow, surprised by the estimate!

40
tankerdude 2 days ago 0 replies      
Hrmmm... I just signed a contract a few weeks ago for SunPower solar panels. Over 40K for an almost 8kW system. It's a hefty cost, but it's ready now.

It's a nearly flush mount, so I'm OK with it, and it still has a traditional look in the front, where the concrete tile will last longer than I will.

Lastly though, I wonder how they deal with valleys and different roof pitches. It would look a little odd unless it is non functional.

41
ctdonath 2 days ago 2 replies      
People keep overlooking the objective value of not relying on "grid" power sources. Power goes off, your system keeps going. Gasoline supply stops (I've seen that a few times), you can just power your car at home. Your system fails, grid is likely still up to cover.

Supply-and-demand takes a sharp turn when supply is actually limited and can/does run out. At that point, having pre-paid for your own uninterrupted off-grid supply is worth a whole lot more.

42
canterburry 2 days ago 0 replies      
Yeah, I feel the coverage of this has been very deceptive. We just got a quote for our roof in SF (small house) and it was ~$20K with an upgraded architectural tile, new spouts and gutters.

Tesla would charge me $67K for the roof alone based on roof size and our energy use.

1. "Unlimited warranty" doesn't actually mean unlimited warranty when your roof starts leaking...just that the tile won't break.

2. Why the heck should I pre-pay Tesla for unrealized savings to my future energy use??

43
woodandsteel 2 days ago 0 replies      
Interesting that the house in the picture has a chimney and a highly slanted roof. It looks like it is in the north, with lots of snow and relatively little sunlight.
44
brianbreslin 1 day ago 0 replies      
Has anyone here looked into PACE financing (https://ygreneworks.com/faqs/ ) for projects like this? what is the interest rate or cost?
45
myroon5 2 days ago 1 reply      
I'm surprised they didn't team up more with Google's Project Sunroof or Zillow or create their own version of those projects, so that you could just put in your home address and get all the relevant details. Had to check Zillow to find out my own square footage.
46
waynecochran 2 days ago 0 replies      
Google's solar savings estimator is pretty cool -- I punch in my house address and it examines my roof line and notes how southerly each part of my roof lies.

Unfortunately, it says my cost would go up $53 a month! No point in getting solar if you live in Portland.

47
lostgame 2 days ago 0 replies      
I see a lot of these personal-level-vs-global-level discussions here, but ultimately not enough posts celebrating the fact that we're looking at both here, and at a potentially much better future because of it.

Go, Tesla.

48
yeukhon 2 days ago 0 replies      
> Installations will start in June, beginning with California

Would be interested in NYC.

So a couple of questions for those who have already done this:

* any tax benefit / government subsidies I should be aware of?

* starting small, recommendation? pricing?

49
batushka 1 day ago 0 replies      
I hope only rich people will be separated from their money and no loans will be given for this.
50
amelius 2 days ago 1 reply      
Why don't banks (or even Tesla for that matter) finance such roofs upfront? It seems they can make money out of this.
51
dharma1 2 days ago 1 reply      
UK availability?
52
dEnigma 2 days ago 0 replies      
"In doing our own research on the roofing industry, it became clear that roofing costs vary widely, and that buying a roof is often a worse experience than buying a car through a dealership."

Seems like someone just couldn't resist putting that little jab in there.

53
EGreg 2 days ago 0 replies      
I love their graph, with the solar roof being the only "negative-cost" roof. Really drives the (sales) point home.
54
abc_lisper 2 days ago 1 reply      
Where is the finance option?
55
owenversteeg 2 days ago 3 replies      
Holy shit, these comments.

"My cars will make $24,000 per year" (in a comment justifying spending a quarter million dollars on Tesla products)

"People don't understand how we can spend half their monthly income on one organic drink" (in a completely unrelated comment)

"if I buy two Teslas my savings triple!" (another person justifying Tesla's marketing with some creative math)

Sounds like something you'd hear from protesters mocking the 1%, but no, just another day here on HN.

I've been here (in various incarnations) long enough to say this, so could we try to be just a little bit more self aware? 24k/yr is nearly double the minimum wage. A quarter million dollars is a truly immense amount of money. And buying two cars is a dream for most of America, ignoring the fact that those are two Teslas, which are roughly $70k cars (and no, you can't currently buy any $35k Teslas no matter what Musk's Twitter says.)

56
gigatexal 2 days ago 0 replies      
The infinity warranty is a draw for me.
57
stratigos 2 days ago 13 replies      
People are thinking way too much about how much this saves them at a personal level.

I think people should instead be thinking about how we can save the existence of the entire species, and all other higher order forms of life on earth, rather than focusing on their individual tax breaks, savings, or other trivial concerns. Yes, your cash flow is rendered quite trivial if life on Earth ends.

Invest in the Life Economy, and turn your back on the Death Economy. The value here is in the benefit to life, concern over state monopolized currencies clearly facilitates an economy of death.

58
awkwardtortoise 2 days ago 2 replies      
59
SirLJ 2 days ago 0 replies      
Way too expensive; as always, too much hype and then nothing...
7
CockroachDB 1.0 cockroachlabs.com
803 points by hepha1979  2 days ago   344 comments top 49
1
dis-sys 2 days ago 3 replies      
I really like the fact that the CockroachDB team recently did a detailed Jepsen test with Aphyr. The follow up articles from both CockroachDB and Aphyr explaining the findings are very interesting to read. For those who might be interested -

https://www.cockroachlabs.com/blog/cockroachdb-beta-passes-j...

https://jepsen.io/analyses/cockroachdb-beta-20160829

2
rantanplan 2 days ago 0 replies      
In an era where hot air and hip DB technologies prevail, I'd like to emphasize the fact that the CockroachDB engineers are consistently honest and down to earth, in all relevant HN posts.

This builds up my confidence in their tech, so much so that even though I had no real reason to try this new DB, I'm gonna find one! :D

3
Svenskunganka 2 days ago 3 replies      
Pardon the nature of my question, but I'm really interested in what your experience has been so far building a database with Go. Has its runtime (the GC, for example) posed any issues for you so far? Looking at other RDBMSs, languages with manual memory management like C or C++ seem to be the go-to choice, so what were the reasons you chose Go?

I'm quite frankly amazed that Go's runtime is able to support a database with such demanding capabilities as CockroachDB!

4
wmfiv 2 days ago 1 reply      
Are there published benchmarks for multi-key operations and more complex SELECT statements? I apologize if I missed them.

I'm trying to determine whether there's a place for Cockroach within what I think are the constraints in the database space.

* Traditional SQL Databases

 - Go to solution for every project until proven otherwise.
 - Battle tested and unmatched features.
 - Hugely optimized with incredible single node performance.
 - Good replication and failover solutions.
* Cassandra

 - Solved massive data insert and retention.
 - Battle tested linear scalability to thousands of nodes.
 - Good per node performance.
 - Limited features.
It seems like many new databases tend to suffer from providing scale-out but relatively poor per-node performance, so that a mid-size cluster still performs worse than a single-node solution based on a traditional SQL database.

And if you genuinely need huge insert volumes, because of the per-node performance you'd need an enormous cluster, whereas Cassandra would deal with it quite comfortably.

5
sixdimensional 2 days ago 2 replies      
How does Cockroach efficiently handle the shuffle step when data is on many nodes on the cluster and has to move to be joined? Does Cockroach need high capacity network links to function well?

I always see companies making the claim of linear speedup with more nodes but surely that can't be the case if the nodes are geographically disjointed over anything less than gigabit links? Perhaps linear speedup with more nodes is only possible over high speed connections? How high is that exactly?

Congratulations to the team on the release! Introducing this kind of database is no easy task - thank you and great job, keep up the good work!

6
vtomasr5 2 days ago 0 replies      
I think this is the DB project of the year in the open source community. Cockroach Labs has made an incredible effort to develop and test a new database, and these guys are giving it away for free (I read about the Series B raise too ;)) for us to use.

Thanks for doing this. You're very much appreciated. (BTW I love the name and the logo!!)

7
toddmorey 2 days ago 1 reply      
There was a great session with Spencer Kimball (CockroachDB creator) and Alex Polvi (CoreOS) at the OpenStack Summit. It's a good overview and demo: https://youtu.be/PIePIsskhrw
8
daliwali 2 days ago 2 replies      
CockroachDB looks like a great alternative to PostgreSQL, congrats to the team for doing so much in such a short time. The wire protocol is compatible with Postgres, which allows re-using battle-tested Postgres clients. However it's a non-starter for my use case since it lacks array columns, which Postgres supports [0]. I also make use of fairly recent SQL features introduced in Postgres 9.4, but I'm not sure if there are major issues with compatibility.

[0] https://github.com/cockroachdb/cockroach/issues/2115

9
v_elem 2 days ago 1 reply      
It looks like there is still no mechanism for change notification, which in our particular case is the only missing feature that prevents using it as a postgresql replacement.

Does anybody know if this feature is planned in the short or medium term?

https://github.com/cockroachdb/cockroach/issues/6130
https://github.com/cockroachdb/cockroach/issues/9712

10
sergiotapia 2 days ago 2 replies      
Is Cockroach DB intended for just "big-data" companies? Would a small project run really well with Cockroach DB?

Of course a small database probably won't need a lot of the unique features, but is this aiming to replace PG/MySQL in the small/mid-size projects?

11
nik736 2 days ago 3 replies      
What advantages do I have using Cockroach compared to Postgres, Cassandra, Rethink or MongoDB? (I know that all of them are completely different, that's part of the question)
12
apognu 2 days ago 1 reply      
I've been following CockroachDB for quite a while. Great job on 1.0.

I've had a question for quite some time though (and I think there is an RFC for it on GitHub): do we still need to have a "seed node" that is run without the --join parameter, or can we run all the nodes with the same command line, with the cluster waiting for quorum to reconcile on its own?

13
therealmarv 2 days ago 4 replies      
Does this theoretically work interplanetary (just asking, for science)?
14
gred 1 day ago 1 reply      
Very interesting. I have to admit I've seen the product name a few times, but never took the time to have a look. I do have a few questions, though, if any of the engineering team are still around watching the discussion :-)

From the high availability page [1] in the docs:

> Cross-continent and other high-latency scenarios will be better supported in the future.

Do you have a specific timeline in mind? I've been working on an application that needs to be highly-available, and which uses Oracle right now. It seems like you can add all sorts of tools to the mix (RAC, DataGuard, etc), but there are always significant caveats around the capabilities of the resultant system. We're talking 1 to 2 TB of data total, tables of up to 100 million rows with 1 million rows added per day, distributed across three data centers (US, EU, Asia).

And regarding high availability in the context of application deployments, is there any documentation on the locking characteristics of DDL statements? I'm interested in the ability to modify the schema during an application deployment without having to bring down the system or implicitly lock users out. Apologies if I missed it somewhere on the website!

[1] https://www.cockroachlabs.com/docs/high-availability.html

15
misterbowfinger 2 days ago 1 reply      
Can someone give a brief pros/cons between Cockroach DB Core and Google Cloud Spanner?
16
ericb 2 days ago 2 replies      
Can Cockroach be plugged into a Rails app where mysql was?

I'd be interested in hearing:

- the backup story

- the replication/failover story

- horizontal scaling story (is it plug and play)

17
Gurrewe 2 days ago 1 reply      
Congratulations to the team on the release!

Everything under "The Future" really excites me, especially the geo-partitioning features. That is something that I'm really looking forward to be using!

18
v3ss0n 2 days ago 2 replies      
Will there be a RethinkDB-style realtime changefeed, or PostgreSQL's LISTEN/NOTIFY?
19
doanerock 1 day ago 1 reply      
Since CockroachDB has eventually consistent reads, how would that affect my SaaS multiuser application? How long on average would I have to wait for reads to become consistent?
20
nathell 2 days ago 1 reply      
I read the announcement, got all excited, then clicked "What's inside CockroachDB Core?" and got rewarded with a 404. Ouch! This itches.
21
gog 2 days ago 2 replies      
Slightly offtopic, but what do you use for your blog and documentation pages?
22
api 2 days ago 1 reply      
About nine months ago we made the decision to go with RethinkDB for our infrastructure in place of PostgreSQL (at least for live replicated data), but if this existed at the time we'd have seriously taken a look. We're pretty happy with RethinkDB but I plan on still taking a look at this so we have a backup option.
23
bish2 1 day ago 2 replies      
I'm struggling to understand how this company has raised $50 million when db companies with paying customers like RethinkDB and FoundationDB had to shut down.

They are gonna earn back $50 million by selling...a backups tool?

24
MichaelBurge 2 days ago 1 reply      
It probably scales but how is the performance? If I need to load a couple billion rows and do a dozen joins in some analytics, is that one machine, a dozen, or 100?

Is it more for web apps, analytics, or what? When would I consider switching from e.g. Postgres to CockroachDB?

25
doanerock 1 day ago 1 reply      
Say you scaled up to 100 nodes for the holiday season; is there any way to tell how many nodes / how much storage you have to keep running in order to keep 3 backups and maintain your new post-holiday load?
26
bfrog 2 days ago 0 replies      
Should've gone with tardigrade instead as a name, those little bastards can live in space!
27
nhumrich 2 days ago 3 replies      
Does the replication work cross-region, say US-East and US-West? Or even cross-continent? It sounds like the timing requires very short latency and might not work in these scenarios.
28
v3ss0n 2 days ago 1 reply      
Congrats Ben Darnell and team! I am a fan of his work on the Tornado web server!
29
singularjon 2 days ago 0 replies      
How does the speed compare to that of PostgreSQL and MongoDB?
30
wtf_is_up 2 days ago 1 reply      
Does CockroachDB have a streaming API a la RethinkDB changefeeds? This is a killer feature, IMO.
31
acd 2 days ago 0 replies      
Congrats on bringing out 1.0; been following the project and looking forward to trying it out!
32
amq 2 days ago 1 reply      
Can someone explain how it is (or can be) better than MariaDB Galera or MySQL Group Replication?
33
raarts 2 days ago 1 reply      
On a three-node cluster, will it survive two nodes going down?
34
brightball 2 days ago 1 reply      
How does it compare to Couchbase with its N1QL?
35
daxfohl 1 day ago 1 reply      
Curious why Mac is better supported than Windows. This is obviously something you'd run on a server. Do orgs run Mac servers? Is it just to support dev work for people too lazy to launch a VM? Sorry, Windows/Linux ops person here with very little awareness of Mac ecosystem.
36
ncrmro 1 day ago 0 replies      
Any support for postgres trigram searches?
37
xmichael99 2 days ago 1 reply      
Now if we could get a 1.0 of TiDB ???
38
newsat13 2 days ago 7 replies      
Very disappointed with HN turning into a 4chan/reddit style trolling board about the name. Guys, we get it that you don't like the name. Can we please stop bike shedding and move on? The people at cockroachdb have obviously seen all your messages but decided it's worth keeping the name. What more is there to talk about? Why not talk about the relative technical merits of this DB?
39
anthonylebrun 2 days ago 5 replies      
Since there's a little side riff about the name going on I thought I'd throw in my 2 cents. Personally I love the name. I think it does a great job of conveying the spirit of the project and provides unlimited pun opportunities. Plus it's memorable, just like a real life roach encounter. Unfortunately I'm sure some people will discriminate against your DB on the basis of name alone. That's ludicrous, but that's our species for ya.
40
johnwheeler 2 days ago 11 replies      
I think the name "Cockroach" was a really poor decision from a marketing standpoint. The team intended to convey durability, since cockroaches can live through anything. But when I think of a cockroach, I think, gross, disgusting, etc.
41
sandstrom 2 days ago 0 replies      
I think it's an excellent name!

Also, biologists would argue that the cockroach is a magnificent creature, highly adaptable and very fit (in 'survival of the fittest' terms).

I would pay for and deploy a cockroach db because of its name.

42
ccallebs 2 days ago 2 replies      
First, this is awesome! Congrats to the team for reaching this milestone.

Secondly, I think the name is memorable and conveys exactly what it should. If I were ever on an engineering team that chose not to use CockroachDB due to being "grossed out" by the name, I wouldn't be on that engineering team for long. Perhaps someone can explain the knee-jerk reaction to it for me.

43
triangleman 2 days ago 0 replies      
Name doesn't bother me. It's memorable and I'd definitely consider using it, whether in a startup or enterprise. Better than "Postgres" -- how do you even pronounce that?
44
cwisecarver 2 days ago 8 replies      
Cue the comments stating that no one will use this because the name is bad.
45
deferredposts 2 days ago 1 reply      
In a couple of years, I suspect that they will rebrand their name to just "RoachDB". It conveys the same meaning, while not being that awkward to discuss with users/clients
46
socmag 2 days ago 3 replies      
Clocks are meaningless under load.

The higher the frequency of the transactions, the more you get into quantum physics.

In reality, nobody cares if T-Mobile debited your account 0.01ms before WalMart.

[edit] What is important is isolation and consistency of the transactions.

47
Perignon 2 days ago 0 replies      
Name still sucks and is disgusting af.
48
niceperson 2 days ago 1 reply      
>Cockroach

What were they thinking?

49
whatnotests 2 days ago 1 reply      
/me forks the damned repo, renames it, wins the Internet.
8
Uncensorable Wikipedia on IPFS ipfs.io
683 points by bpierre  4 days ago   247 comments top 26
1
cjbprime 4 days ago 11 replies      
Strategically, this (advertising IPFS as an anti-censorship tool and publishing censored documents on it and blogging about them) doesn't seem like a great idea right now.

Most people aren't running IPFS nodes, and IPFS isn't seen yet as a valuable resource by censors. So they'll probably just block the whole domain, and now people won't know about or download IPFS.

We saw this progression with GitHub in China. They were blocked regularly, perhaps in part for allowing GreatFire to host there, but eventually GitHub's existence became more valuable to China than blocking it was. That was the point at which I think that, if you're GitHub, you can start advertising openly about your role in evading censorship, if you want to.

But doing it here at this time in IPFS's growth just seems like risking that growth in censored countries for no good reason.

2
badsectoracula 4 days ago 4 replies      
Correct me if I'm wrong, but if accessing some content through IPFS makes you a provider for that content, doesn't that mean that you are essentially announcing to the world that you accessed the content, which in turn can be used by those who do not want you to access it to target you?

In other words, if someone from Turkey (or China or wherever) uses IPFS to bypass censored content, wouldn't it be trivial for the Turkish/Chinese/etc government to make a list with every single person (well, IP) that accessed that content?
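
(Concretely, provider records are public DHT state, so anyone can list who is serving a given hash; a minimal illustration, assuming the stock ipfs CLI and the snapshot hash from this thread:)

  ipfs dht findprovs QmT5NvUtoM5nWFfrQdVrFtvGfKFmG7AHE8P34isapyhCxX  # prints peer IDs of providers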

3
smsm42 4 days ago 1 reply      
Ironically, I've just discovered that https://ipfs.io/ has a certificate signed by StartCom, known for being a source of fake certificates for prominent domains[1]. So in order to work around censorship, I have to go to a site which, to establish trust, relies on a provider known for providing fake certificates. D'oh.

[1] https://en.wikipedia.org/wiki/StartCom#Criticism

4
k26dr 4 days ago 1 reply      
The following command will allow you to pin (i.e. seed/mirror) the site on your local IPFS node if you'd like to contribute to keeping the site up:

ipfs pin add QmT5NvUtoM5nWFfrQdVrFtvGfKFmG7AHE8P34isapyhCxX
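
(A minimal follow-up, assuming a default local daemon: once pinned, you can browse your own copy through the gateway it serves on port 8080.)

  ipfs daemon &
  # then open http://localhost:8080/ipfs/QmT5NvUtoM5nWFfrQdVrFtvGfKFmG7AHE8P34isapyhCxX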

5
mirimir 4 days ago 0 replies      
Some additional information may help in the duty vs prudence debate. It's true that IPFS gateways can be blocked. But as noted, anyone can create gateways, IPFS works in partitioned networks, and content can be shared via sneakernet. Content can also be shared among otherwise partitioned networks by any node that bridges them.

For example, it's easy to create nodes on both the open Internet and the private Tor OnionCat IPv6 /48. That should work for any overlay network. And once nodes on such partitioned networks pin content, outside connections are irrelevant. Burst sharing is also possible. Using MPTCP with OnionCat, one can reach 50 Mbps via Tor.[0,1]

0) https://ipfs.io/ipfs/QmUDV2KHrAgs84oUc7z9zQmZ3whx1NB6YDPv8ZR...

1) https://ipfs.io/ipfs/QmSp8p6d3Gxxq1mCVG85jFHMax8pSBzdAyBL2jZ...

6
TekMol 4 days ago 3 replies      
How is Wikipedia censored in Turkey? Are providers threatened to be punished if they resolve DNS queries for wikipedia.org? Or are they threatened to be punished if they transport TCP/IP packets with IPs that belong to Wikipedia?

Wouldn't both be trivial to go around? For DNS, one could simply use a DNS server outside Turkey. For TCP/IP packets, one could set up a $5 proxy on any provider from around the world.
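
(For the DNS case, a minimal check with dig, pointing at a public resolver outside the country:)

  dig @8.8.8.8 wikipedia.org +short  # ask Google's resolver instead of the ISP's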

7
eberkund 4 days ago 2 replies      
These distributed file systems are really interesting. I'm curious to know if there is anything in the works to also distribute the compute and database engines required to host dynamic content. Something like combining IPFS with Golem (GNT).
8
kibwen 4 days ago 7 replies      
But Wikipedia allows user edits, and so is inherently censorable. You don't need to block the site, you can just sneak in propaganda a little at a time.
9
treytrey 4 days ago 4 replies      
I'm not sure this thought makes sense, but just putting it out there for rebuttals and to understand what is really possible:

I assume IPFS networks can be disrupted by a state actor, and the only thing that a state actor like the US may have some trouble with is strong encryption. I assume it's also possible that quantum computers, if and when they materialize at scale, would defeat classical encryption.

So my point in putting forward these unverified assumptions is to question whether ANY technology can stand in the way of a determined, major-world-power-type state actor. Personally, I have no reason to believe that's realistic, and all these technologies are toys relative to the billions of dollars in funding that the spy agencies receive.

10
Spooky23 4 days ago 0 replies      
Why bother with a technological anti-censorship solution for Wikipedia when the obvious solution is to just attack the content directly.

If a censoring body wants some information gone, just devote some attention to lobbying the various gatekeepers in Wikipedia.

11
DonbunEf7 4 days ago 2 replies      
Isn't IPFS censorable? That's the impression I got from this FAQ entry: https://github.com/ipfs/faq/issues/47
12
BradyDale 2 days ago 0 replies      
Thanks for sharing this... FWIW, I wrote a story about it on Observer.com: http://observer.com/2017/05/turkey-wikipedia-ipfs/
13
y7 4 days ago 1 reply      
Does IPFS work properly with Tor these days? Last I checked support was experimental at best.

Without proper support for an anonymity overlay, using IPFS to get around your government's censor doesn't sound like a very wise idea.

14
pavement 4 days ago 3 replies      
Listen, I get that there are other parts of the world experiencing serious "technical difficulties" lately...

But I can only read English! Where's the English version?

https://ipfs.io/ipfs/QmT5NvUtoM5nWFfrQdVrFtvGfKFmG7AHE8P34is...

This hash doesn't do much for me:

 QmT5NvUtoM5nWFfrQdVrFtvGfKFmG7AHE8P34isapyhCxX
How do I find the version I want?

If I can't read it in my language, it's still censored for me.

15
slitaz 4 days ago 1 reply      
Didn't you mean "unblockable" instead?
16
maaaats 4 days ago 1 reply      
When browsing the content, how does linking work? I mean, don't they kinda have to link to a hash? But how can they know the hash of a page when the links of that page depend on the other pages, and this may be circular?
17
hd4 4 days ago 4 replies      
Maybe a very dumb question, but why didn't they build anonymity into it rather than advise users to route it over Tor? My guess is it may have something to do with the Unix philosophy. It's still a great tool regardless.
18
LoSboccacc 4 days ago 1 reply      
> In short, content on IPFS is harder to attack and easier to distribute because its peer-to-peer and decentralized.

> port 4001 is what swarm port IPFS uses to communicate with other nodes

uhm.

19
captn3m0 4 days ago 4 replies      
The SSL cert chain is broken for me.
20
amelius 4 days ago 1 reply      
Sounds good, but isn't this a fork of Wikipedia?
21
forvelin 4 days ago 2 replies      
At this moment, it is enough to use Google DNS or some VPN to reach Wikipedia in Turkey. This is a good use case, but IPFS is just overkill.
22
awqrre 4 days ago 0 replies      
until they create laws...
23
davidcollantes 4 days ago 1 reply      
Will it be available if the domain (ipfs.io) stops resolving, gets seized or is blocked?
24
nathcd 4 days ago 2 replies      
I'd be really curious to hear more about how Goal 2 (a full read/write wikipedia) could work.

IIRC, writing to the same IPNS address is (or will be?) possible with a private key, so allowing multiple computers to write to files under an IPNS address would require distributing the private key for that address?
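
(That matches how the CLI works today, as far as I know; a minimal sketch, where the node's own key signs the record:)

  ipfs add -r ./site              # hash the content, returns a root hash
  ipfs name publish /ipfs/<hash>  # point this node's IPNS name at it (requires the private key)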

Also, I wonder how abuse could be dealt with. I've got to imagine that graffiti and malicious edits would be much more rampant without a central server to ban IPs. It seems like a much easier (near-term) solution would be a write-only central server that publishes to an (read-only) IPNS address, where the load could be distributed over IPFS users.

25
devsigner 4 days ago 0 replies      
Here it is on Archive.is just for good measure and posterity purposes: https://archive.is/GnjGT
26
onetwoname 3 days ago 0 replies      
How about you remove all the lies from wikipedia, the lies curated by CIA. No? Oh, right, I forgot you only make the illusion of justice.
9
CPU Utilization is Wrong brendangregg.com
607 points by dmit  3 days ago   91 comments top 34
1
faragon 3 days ago 3 replies      
I respect Brendan, and although it is an interesting article, I have to disagree with him: the OS tells you about OS CPU utilization, not CPU micro-architecture functional unit utilization. So if the OS uses a CPU for running code until a physical interrupt or a software trap happens, in that period the CPU has been doing work. Unless the CPU were able to do a "free" context switch to a cached area, not having to wait for e.g. a cache miss (hint: SMT/"hyperthreading" was invented exactly for that use case), the CPU is actually busy.

If in the future (TM) using CPU performance counters for every process becomes really "free" (as in "gratis" or "cheap"), the OS could report badly performing processes for the reasons exposed in the article (low IPC indicating poor memory access patterns, unoptimized code, code using too-small buffers for I/O and causing system performance degradation through excessive kernel processing time, etc.), showing the user that despite high CPU usage, the CPU is not getting enough work done (in that sense I could agree with the article).
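
(You can already sample this per process today; a minimal sketch assuming Linux perf, with <pid> as a placeholder:)

  perf stat -e cycles,instructions -p <pid> sleep 10  # prints instructions per cycle for that window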

2
glangdale 3 days ago 2 replies      
The problem is that IPC is also a crude metric. Even leaving aside fundamental algorithmic differences, an implementation of some algorithm with an IPC of 0.5 is not necessarily slower than an implementation that somehow manages to hit every execution port and deliver an IPC of 4.

I can improve IPC of almost any algorithm (assuming it is not already very high) by slipping lots of useless or nearly useless cheap integer operations into the code.

People always tell you "branch misses are bad" and "cache misses are bad". You should always ask: "compared to what"? If it was going to take you 20 cycles' worth of frenzied, 4-instructions-per-clock work to calculate something you could keep in a big table in L2 (assuming that you aren't contending for it), you might be better off eating the cache miss.

Similarly you could "improve" your IPC by avoiding branch misses (assuming no side effects) by calculating both sides of an unpredictable branch and using CMOV. This will save you branch misses and increase your IPC, but it may not improve the speed of your code (if the cost of the work is bigger than the cost of the branch misses).
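
(A minimal C sketch of that trade-off; with optimizations on, the ternary typically compiles to a cmov. The function and the arithmetic are illustrative.)

  /* Both sides are always computed, then one is picked without a
     branch. Instructions retired (and IPC) go up, but wall time only
     improves if the branch was genuinely hard to predict. */
  static int select_branchless(int cond, int a, int b)
  {
      int t = 2 * a + 1;   /* "taken" side, always computed */
      int f = b - 3;       /* "not taken" side, always computed */
      return cond ? t : f; /* compilers typically emit cmov here */
  }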

3
dekhn 3 days ago 1 reply      
IPC is amazing. We had some "slow" code, did a little profiling, and found that a hash lookup function was showing very low IPC about half the time. Turns out, the hash table was mapped across two memory domains on the server (NUMA), and a memory lookup from one processor to the other processor's memory was significantly slower.

perf on a binary that is properly instrumented (so it can show you per-source-line or per-instruction data) is really great.

4
inetknght 3 days ago 2 replies      
I use `htop` for all of my Linux machines. It's great software. But one of my biggest gripes is that "Detailed CPU Time" (F2 -> Display options -> Detailed CPU time) is not enabled by default.

Enabling it allows you to see a clearer picture of not just stalls but also CPU steal from "noisy neighbors" -- guests also assigned to the same host.

I've seen CPU steal cause kernel warnings of "soft-lockups". I've also seen zombie processes occur. I suspect they're related but it's only anecdotal: I'm not sure how to investigate.

It's pretty amazing what kind of patterns you can identify when you've got stuff like that running. Machine seems to be non-responsive? Open up htop, see lots of grey... okay so since all data is on the network, that means that it's a data bottleneck; over the network means it could be bottlenecked at network bandwidth or the back-end SAN could be bottlenecked.

Fun fact: Windows Server doesn't like having its disk IO go unserviced for minutes at a time. That's not a fun way to have another team come over and get angry because you're bluescreening their production boxes.

5
nimos 3 days ago 2 replies      
Perf is fascinating to dive into. If you are using C and gcc, you can use record/report, which show you line by line and instruction by instruction where you are getting slowdowns.
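
(A minimal run, assuming the binary was built with -g so source lines resolve; "myprog" is a placeholder:)

  perf record -g ./myprog  # sample call stacks while the program runs
  perf report              # drill down to hot lines and instructions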

One of my favorite school assignments was being given an intentionally bad implementation of the Game of Life compiled with -O3 and trying to get it to run faster without changing compiler flags. It's sort of mind-boggling how fast computers can do stuff if you can reduce the problem to fixed-stride for loops over arrays that can be fully pipelined.
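
(A minimal C example of the loop shape that pipelines well; function and array names are illustrative:)

  /* Fixed-stride loop over flat arrays: no data-dependent control flow,
     so the compiler can vectorize it and the prefetchers stay ahead. */
  void saxpy(float a, const float *x, float *y, int n)
  {
      for (int i = 0; i < n; i++)
          y[i] = a * x[i] + y[i];
  }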

6
valarauca1 3 days ago 0 replies      
We are what we measure.

Very true that at 100% CPU utilization the CPU is often waiting on bus traffic (loading caches, loading RAM, loading instructions, decoding instructions); only rarely is the CPU _doing_ useful work.

The context of what you are measuring depends if this is useful work or not. The initial access of a buffer almost universally stalls (unless you prefetched 100+ instructions ago). But starting to stream this data into L1 is useful work.

Aiming for 100%+ IPC is _beyond_ difficult even for simple algorithms and critical hot path functions. You not only require assembler cooperation (to assure decoder alignment), but you need to know _what_ processor you are running on to know the constraints of its decoder, uOP cache, and uOP cache alignment.

---

Perf gives you the ability to capture per-PID counters. Generally, just look at cycles passed vs. instructions decoded.

This gives you a general overview of stalls. Once you dig into IPC, front end stalls, back end stalls. You start to see the turtles.

7
exabrial 2 days ago 0 replies      
Your CPU will execute a program just as fast at 5% as at 75%.

We honestly need a tool that compares I/O, memory fetch, cache-miss, TLB misses, page-outs, CPU Usage, interrupts, context-swaps, etc all in one place.

8
prestonbriggs 3 days ago 0 replies      
At Tera, we were able to issue 1 instruction/cycle/CPU. The hardware could measure the number of missed opportunities (we called them phantoms) over a period of time, so we could report percent utilization accurately. Indeed, we could graph it over time and map periods of high/low utilization back to points in the code (typically parallel/serial loops), with notes about what the compiler thought was going on. It was a pretty useful arrangement.
9
alain94040 2 days ago 2 replies      
The article is interesting, but IPC is the wrong metric to focus on. Frankly, the only thing we should care about when it comes to performance is time to finish a task. It doesn't matter if it takes more instructions to compute something, as long as it's done faster.

The other metric you can mix with execution time is energy efficiency. That's about it. IPC is not a very good proxy. Fun to look at, but likely to be highly misleading.

10
heinrichhartman 2 days ago 0 replies      
It seems to me that the CPU utilization metric (from /proc/stat) has far more problems than misreporting memory stalls.

As far as I understand it, the metric works as follows: at every clock interrupt (every 4ms on my machine) the system checks which process is currently running, before invoking the scheduler:

- If the idle process is running, idle time is accounted.
- Otherwise the processor is regarded as utilized.

(This is what I got from reading the docs, and digging into the source code. I am not 100% confident I understand this completely at this point. If you know better please tell me!)

There are many problems with this approach: every time slice (4ms) is accounted either as completely utilized or completely free, yet there are many reasons for processes going on CPU or off CPU outside of clock interrupts; blocking syscalls are the most obvious one. In the end a time slice might be utilized by multiple different processes and interrupt handlers, but if at the very end of the time slice the idle thread is scheduled on CPU, the whole slice is counted as idle time!
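
(To see the raw counters this accounting feeds, a minimal check; the field order below is per the kernel documentation:)

  head -1 /proc/stat
  # cpu  user nice system idle iowait irq softirq steal guest guest_nice  (in USER_HZ ticks)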

See also: https://github.com/torvalds/linux/blob/master/Documentation/...

11
deathanatos 3 days ago 0 replies      
There's also loadavg. I've encountered a lot of people who think that a high loadavg MUST imply a lot of CPU use. Not on Linux, at least:

> The first three fields in this file are load average figures giving the number of jobs in the run queue (state R) or waiting for disk I/O (state D) averaged over 1, 5, and 15 minutes.

Nobody knows about the "or waiting for disk I/O (state D)" bit. So a bunch of processes doing disk I/O can cause loadavg spikes, but there can still be plenty of spare CPU.

12
westurner 2 days ago 1 reply      
Instructions per cycle: https://en.wikipedia.org/wiki/Instructions_per_cycle

What does IPC tell me about where my code could/should be async so that it's not stalled waiting for IO? Is combined IO rate a useful metric for this?

There's an interesting "Cost per GFLOPs" table here: https://en.wikipedia.org/wiki/FLOPS

Btw these are great, thanks: http://www.brendangregg.com/linuxperf.html

( I still couldn't fill this out if I tried: http://www.brendangregg.com/blog/2014-08-23/linux-perf-tools... )

13
jarpineh 2 days ago 0 replies      
By clicking through some links in the article I found this: http://www.brendangregg.com/blog/2014-10-31/cpi-flame-graphs...

Now I wonder how easy, and how much manual work, it would be to do these combined flame graphs with CPI/IPC information? My cursory search found nary a mention after 2015... Perhaps this is still hard and complicated.

To me it seems really useful to know why a function takes so long to work (waiting or calculating) and not "merely" how long it takes, even if the information is not perfectly reliable and can't be measured without affecting execution.

14
surki 2 days ago 0 replies      
Another related tool I found interesting: perf c2c

This will let us find the false sharing cost (cache contention etc).

https://joemario.github.io/blog/2016/09/01/c2c-blog/

15
joosters 2 days ago 0 replies      
I can't see a mention of it here, or on the original page, so IMO it's worth pointing out a utility that you will most likely already have installed on your Linux machine: vmstat. Just run:

 vmstat 3
And you'll get a running breakdown of CPU usage (split into user/system), and a breakdown of 'idle' time (split into actual idle time and time waiting for I/O (or some kinds of locks)).

The '3' in the command line is just how long the stats are averaged over, I'd recommend using 3+ to average out bursts of activity on a fairly steady-state system.

16
jeevand 3 days ago 1 reply      
Interestingly, IPCs are also used to verify new chipsets at embedded companies. Run the same code on the newer-generation chipset and see if the IPC is better than the previous one. IPC is one of the main criteria for whether the new chipset is a hit or a miss (others are power...).
17
glandium 3 days ago 1 reply      
I didn't know about tiptop, and it sounds interesting. Running it, though, it only shows "?" in the Ncycle, Minstr, IPC, %MISS, %BMIS and %BUS columns for a lot of processes, including, but not limited to, Firefox.
18
toast0 3 days ago 0 replies      
CPU util might be misleading, but cpu idle under a threshold at peak [1] means you need more idle cpu and you can get that by getting more machines, getting better machines, or getting better code.

Only when I'm trying to get better code, do I need to care about IPC, and cache stalls. I may also want better code to improve the overall speed of execution, too.

[1] (~50% if you have a pair of redundant machines and load scales nicely, maybe 20% idle or even less if you have a large number of redundant machines and the load balances easily over them)

19
deegu 2 days ago 0 replies      
CPU frequency scaling can also lead to somewhat unintuitive results. On a few occasions I've seen CPU load % increase significantly after code was optimized. The optimization was still actually valid, and the actual executed instructions per work item went down, but the CPU load % went up since the OS decided to clock down the CPU due to the reduced workload.
20
gpvos 3 days ago 0 replies      
The server seems overloaded (somewhat ironically). Try http://archive.is/stDR0 .
21
xroche 2 days ago 1 reply      
> You can figure out what %CPU really means by using additional metrics, including instructions per cycle (IPC)

Correct me if I am wrong, but this won't work for spinlocks in busy loops: you do have a lot of instructions being executed, but the whole point of the loop is to wait for the cache to synchronize, and as such, this should be taken as "stalled".

22
jwatte 3 days ago 1 reply      
I think thinking about the CPU as mainly the ALU seems myopic. The job of the CPU is to get data into the right pipeline at the right time. Waiting for a cache miss means it's busy doing its job. Thus, CPU busy is a reasonable metric the way it is currently defined and measured. (After all, the memory controller is part of the CPU these days.)
23
kazinator 2 days ago 1 reply      
This article is not as silly as it could be.

Let me help.

Look, CPU utilization is misleading. Did you forget to use -O2 when compiling your code? Oops, CPU utilization is now including all sorts of wasteful instructions that don't make forward progress, including pointless moves of dead data into registers.

Are you using Python or Perl? CPU utilization is misleading; it's counting all that time spent on bookkeeping code in the interpreter, not actually performing your logic.

CPU utilization also measures all that waste when nothing is happening, when arguments are being prepared for a library function. Your program has already stalled, but the library function hasn't started executing yet for the silly reason that the arguments aren't ready because the CPU is fumbling around with them.

Boy, what a useless measure.

24
gens 3 days ago 1 reply      
The core waiting for data to be loaded from RAM is busy. Busy waiting for data.

Instructions per cycle can also be misleading. Modern CPUs can do multiple shifts per cycle, but something like division takes a long time.

It all doesn't matter anyway, as instructions per cycle does not tell you anything specific. Use the CPU's built-in performance counters; use perf. It basically works by sampling every once in a while. It (perf, or any other tool that uses performance counters) shows you exactly which instructions are taking up your process's time. (Hint: it's usually the ones that read data from memory, so be nice to your caches.)

It's not rocket surgery.

25
alkonaut 2 days ago 0 replies      
Is there any easy way to do profiling that reveals stalled CPU because of pointer chasing, for "high-level devs" on Windows?
26
taeric 3 days ago 1 reply      
This is silly. The conceit that ipc is a simplification for "higher is better" is exactly the problem he has with utilization.

True, but useful? Most of us are busy trying to get writes across a networked service. Indeed, getting to 50% utilization is often a dangerous place.

For reference, running your car by focusing on rpm of the engine is silly. But, it is a very good proxy and even more silly to try and avoid it. Only if you are seriously instrumented is this a valid path. And getting that instrumented is not cheap or free.

27
heisenbit 2 days ago 0 replies      
Any way to do something equivalent on OSX?
28
buster 2 days ago 0 replies      
This was very enlightening. I have the highest respect for Brendan and his insights, I must say.
29
JohnLeTigre 3 days ago 0 replies      
Or your code could be riddled with thread contention.

I guess this is why he used the term "likely".

Interesting article though

30
willvarfar 3 days ago 0 replies      
Using IPC as a proxy for utilization is tricky because an out-of-order machine can only get that max IPC if the instructions it is executing are not dependent on not-yet-computed instructions.

In-order CPUs are much easier to reason about; you can literally count the stalled cycles.

31
spullara 2 days ago 0 replies      
Need a new metric "CPU efficiency".
32
nhumrich 3 days ago 1 reply      
Totally disagree with the premise of the article. Every metric tool that I know of that shows CPU utilization correctly shows CPU work. Load, on the other hand, represents CPU plus iowait (overall system throughput). IO wait is also exposed in top as the "wait" metric. An Amazon EC2 box can very easily get to load(5) = 10 (anything above 1 is considered bad), but the CPU utilization metric will still show almost no CPU util.
33
flamedoge 3 days ago 2 replies      
> If your IPC is < 1.0, you are likely memory stalled,

depends on the workload.

34
tjoff 3 days ago 1 reply      
Well, this is the reason I hate HyperThreading: does your app consume 50% or 100%? With hyperthreading you have no clue.

And that is per core; it becomes increasingly meaningless on a dual-core, and on a quad-core and above you might as well replace it with MS Clippy.

And this is before discussing what that percentage really means.

edit: I'm interpreting the downvotes that people are in denial about this ;)

10
Visual Studio for Mac visualstudio.com
526 points by insulanian  2 days ago   310 comments top 47
1
0x0 2 days ago 5 replies      
I find the naming "Visual Studio for Mac" pretty deceptive, since apparently it is not anything like the win32 VS environment, but instead based on Xamarin Studio. Even the tagline is deceptive: "The IDE you love, now on the Mac".

I would guess this won't let you build/debug win32 or winforms or wpf applications, or install any .vsix extensions from the visual studio marketplace (of which there are lots of useful ones, such as this one to manage translations - https://marketplace.visualstudio.com/items?itemName=TomEngle... ) - correct me if I'm wrong, but if I can't install my .vsix extensions, this is not "the IDE you love, now on the Mac".

2
jot 2 days ago 5 replies      
Almost 10 years since I exchanged emails with Steve Ballmer about this: https://medium.com/@jot/me-and-steve-ballmer-in-2007-68456a5...
3
fotbr 2 days ago 7 replies      
Since there's a PM here from Microsoft, I've got a couple questions regarding the requirement to "sign in with your Microsoft account":

With all your branding changes over the years, what's considered a Microsoft account today? My old Hotmail account, that existed from the days before Microsoft bought Hotmail? I think it's still alive, but I haven't logged in in the better part of a decade to find out. The accounts created over the years for various Xbox machines? I think those are still around, but I doubt I could get into them at this point. The "Live" account I had to create for MSDN many years ago? Once that job and associated need for MSDN ended I've not logged in to see if it's still around.

Which one(s) should I try to find login information for to use?

Furthermore, why must I sign in in the first place for the free version? I can understand signing in to associate the install with a paid version with extra features, but I see no reason to require it for free versions without any paid features.

4
srcmap 2 days ago 4 replies      
I used to be a big VB, VC++ fanboy a long time ago. 1995 :-) Have since moved on....

Tried building a few open-source apps with VS once a year for the past few years and found that I couldn't compile a single Windows open source package from GitHub or SourceForge after weeks of trying.

The code might claim to be able to build with VS10 or VS12, but the dependency libraries will need completely different VS versions of the .xml/.proj/.sln build systems.

I challenge the PM of the VS product to try to build a few popular open-source projects such as Python, VLC, or anything on http://opensourcewindows.org/. Document the process of building the app and its dependent libraries. Compare that to the process of trying to build those same packages on Mac (with brew) or on Linux.

On Linux, for all the packages I like to play with, "./configure && make" handles most of the build in a few minutes. Even easier on Ubuntu with the apt-get source/build commands. Very similar process on Mac.
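
(A minimal illustration of the Ubuntu route, using VLC as an example package:)

  apt-get source vlc           # fetch and unpack the package sources
  sudo apt-get build-dep vlc   # install everything needed to build it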

Even the Linux kernel I can build easily, with pretty much the same 1-2 commands, and it's been that way for the past 20 years.

5
satysin 2 days ago 2 replies      
I really wish Microsoft had made UWP cross-platform. Would be pretty amazing if I could use UWP/C# to target Windows, Linux, macOS, iOS and Android properly. With UWP being limited to just Windows I don't see it ever being a success.
6
kraig911 2 days ago 2 replies      
Is this more than just Xamarin? I'm sorry -- I tried last time and that was the impression I got. I know it says it has ASP.NET Core, but can I truly build .NET web-services-based apps now without Parallels?
7
delegate 2 days ago 3 replies      
Does it support C++? To me, "Visual Studio" is about C++ development and I miss a similarly powerful C++ IDE on the Mac.

From what I can see, it only supports C# (and family), so what good is it to a C++ / OSX dev ?

8
holydude 2 days ago 3 replies      
The only problem I have with MS's ecosystem is their love of having a lot of concepts and names for everything. I am literally lost, and I do not know which .NET/<whatever> is which and how it is used.

So is this just Xamarin repackaged?

9
yread 2 days ago 3 replies      
Microsoft Build is becoming an event where hell freezes over lately: VS on Mac, Linux on Windows, open-source ASP.NET and .NET, SQL Server on Linux.
10
BugsJustFindMe 2 days ago 2 replies      
It would be really nice to have a microsoft rep in here to answer questions. Because what I really want is visual studio that can build C++ win32 MFC executables without having to run Windows in a virtual machine. Can it do that? I don't know.
11
zamalek 2 days ago 2 replies      
> Xamarin

Isn't this just MonoDevelop? Or have Microsoft added secret sauce to the mix?

12
fleshweasel 1 day ago 1 reply      
They're promoting this as a new dev environment for .NET Core, but there's still ZERO tooling for Razor. I tried starting a simple example project and the .cshtml files didn't even have any syntax highlighting, let alone syntax/type checking.

I don't know how you work on cross-platform ASP.NET for this long and still not have the tooling for your templating engine ported.

13
nobleach 2 days ago 0 replies      
I sincerely would LOVE to have an F# development IDE that didn't ask me to install Mono. I don't have anything against Mono, per se, I just want to see that Microsoft officially supports it across the three major platforms.
14
vetinari 2 days ago 1 reply      
Again, online installer only. Did something change recently that makes it difficult to produce a full, offline installer?

If yes, JetBrains didn't notice, because they are still able to do that for their products.

15
NDT 2 days ago 2 replies      
I don't understand. I've been using VS on Mac for the past 3 months to develop C# applications for a class of mine. Was that just a beta? What's so different about this?
16
keithly 2 days ago 0 replies      
17
jbmorgado 2 days ago 0 replies      
I can't really understand the full depth from the announcement, but to me this looks like something that already existed for a few years, Xamarin.

What are the differences between this product and Xamarin for macOS (something that already existed)?

18
blowski 2 days ago 1 reply      
Anyone know what support is planned for other languages? e.g. Go, Ruby, and PHP.
19
legohead 2 days ago 1 reply      
Crashes during install process for me. :\

Looks like during Xamarin installation: /Users/USER/Downloads/Install Visual Studio.app/Contents/MacOS/Install_Xamarin - Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[NSScrollView heightAnchor]: unrecognized selector sent to instance 0x6080003c0870'

Bummer.

20
mb_72 2 days ago 0 replies      
More good news from the MS / Xamarin camp. A few years ago I 'bet the farm' on using Xamarin for Mac to develop a Mac version of our PC application (with shared code in a PCL); since that time Xamarin (and then MS/Xamarin after the buyout) have rarely failed to impress. Kudos to the team.
21
kapuru 2 days ago 1 reply      
Any .NET MVC developers here? I always wanted to learn ASP.NET MVC, but never did because I was scared of the deployment situation on Linux. Has anything changed in that regard? Would you say deploying a .NET web app works almost as smoothly on Linux as, let's say, a node.js app?
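
(For what it's worth, the .NET Core happy path on Linux is now just the CLI; a minimal sketch assuming the SDK 1.x tooling, with MyApp as a placeholder name:)

  dotnet new mvc -o MyApp  # scaffold an ASP.NET Core MVC project
  cd MyApp
  dotnet restore           # pull NuGet packages
  dotnet run               # self-host on Kestrel
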
22
zzbzq 2 days ago 0 replies      
Coincidentally I was just using this & Xamarin Studio on mac today. I didn't realize VS Mac had been released; I already had the beta.

So far I don't like it as much! Not sure what features are here I actually care about as I'm just using Mono. The pads no longer make sense in VS for Mac. I just have debug pads open all the time. I can't really tell when I've stopped debugging. There's weird buttons on the pads that do nothing. Not sure why all the clutter is here, Xamarin Studio had this stuff figured out.

23
perseusprime11 2 days ago 1 reply      
Visual Studio Code is the way to go on Mac.
24
dohboy 2 days ago 1 reply      
Same strategy as always. Rebrand current products and call it new. This is not Visual Studio as known from Windows but Xamarin Studio rebranded. Title should be Microsoft releases Xamarin update...
25
mixedCase 2 days ago 1 reply      
Again? I've seen the announcement for its release three times on HN.

And before someone mentions it, no I'm not confusing it with VS Code. I mean "Visual Studio for Mac", the Xamarin Studio fork.

26
avenoir 2 days ago 0 replies      
Is anybody doing professional development on .NET using VS for Mac? All this time I thought it was just Xamarin tools, but it looks like it actually has .NET Core project templates too. This has been the only thing that kept me away from Macs as a .NET dev.
27
rcarmo 2 days ago 0 replies      
I've been waiting for this for a while. Only trouble so far is that the installer comes up in the wrong locale for me (it ignores the language ordering in Preferences and displays the installer in my secondary/input language, not English, unlike fully native apps).
28
jhasse 2 days ago 1 reply      
This is still using GTK+ (3?), right?

How did they manage to integrate the buttons in the title bar with it?

29
JohnnyConatus 2 days ago 0 replies      
Is VS for Mac recommended for typescript development? I'm using VS code right now.
30
jhwhite 2 days ago 0 replies      
Is this VS? Or just Xamarin? Could I do Python development on it like I can with Win VS?
31
baltcode 2 days ago 1 reply      
Is there any way to run this/download a compatible version for OSX 10.9?
32
gaza3g 2 days ago 0 replies      
I'm currently working on an MVC5 project on .NET 4.6.1 using VS2015 on Windows.

Can I load my solution on VS for Mac and have it work out of the box (restoring nuget packages, etc.)?

33
relyks 2 days ago 4 replies      
Will this allow you to make cross-platform Windows Forms applications?
34
EGreg 2 days ago 0 replies      
Can this do PHP and Javascript / Web Development?

Objective C? Swift?

35
nexoman 1 day ago 0 replies      
Wanted to try it but installer crashes on OSX Yosemite 10.10.5
36
Clobbersmith 2 days ago 2 replies      
Is there a reason why the installer is in French? My preferred language is set to English.
37
alex_suzuki 2 days ago 0 replies      
Any chance we're going to get a Hololens development environment for the Mac anytime soon?
38
DeepYogurt 2 days ago 0 replies      
Hit download and get a popup for a free 60 day course. No thanks.
39
genzoman 2 days ago 1 reply      
First-rate development experience on Mac. MS is slaying it lately.
40
bitmapbrother 2 days ago 0 replies      
Does Visual Studio for Mac have the same functionality as Visual Studio for Windows? If not then they should really stop confusing customers by rebranding a product that had nothing to do with Visual Studio for Windows.
41
exabrial 2 days ago 0 replies      
I don't even know what's real anymore...
42
wkirby 2 days ago 2 replies      
Is the installer in Chinese for anyone else?
43
gaius 1 day ago 0 replies      
Bring back CodeWarrior I say
44
minhoryang 2 days ago 0 replies      
So beautiful!
45
itsdrewmiller 2 days ago 0 replies      
Let me know when this supports .NET 4.
46
adultSwim 2 days ago 1 reply      
47
mcjon77 2 days ago 0 replies      
No lie, when I saw the title of this thread for a few seconds I was confused and wanted to check my calendar. I kept thinking "Is this April 1st?".
11
Waymos lawsuit against Uber is going to trial, judge rules techcrunch.com
467 points by golfer  1 day ago   254 comments top 18
1
aresant 1 day ago 6 replies      
This is a huge precursor to the real blow:

"The decision hints that Alsup's pending decision on a preliminary injunction might not be favorable to Uber... it could effectively halt Uber's self driving development plans entirely while the trial plays out."

In the context of Travis' view: "What would happen if we weren't a part of that future? If we weren't part of the autonomy thing? Then the future passes us by basically, in a very expeditious and efficient way," he said. (1)

(1) https://www.google.com/amp/s/amp.businessinsider.com/travis-...

2
kyrra 1 day ago 0 replies      
3
eridius 1 day ago 4 replies      
Buried at the end of the article:

> Update: Judge Alsup has also referred the case to the U.S. Attorney for a possible criminal investigation.

(https://techcrunch.com/2017/05/11/waymos-claims-of-trade-sec...)

4
p49k 19 hours ago 4 replies      
"It is unfortunate that Waymo will be permitted to avoid abiding by the arbitration promise it requires its employees to make."

The idea of forcing people into arbitration has crept into every agreement and contract that Americans enter, and it's out of control. Congress needs to pass a law forbidding contracts from forcing individuals into arbitration.

5
siegel 3 hours ago 0 replies      
After reading Judge Alsup's opinion, I'd be very concerned if I were Uber. Certainly not sympathetic to Uber's arguments.

That being said, I was a bit surprised by his decision. I was one of the attorneys for the prevailing defendant in the Torbit v. Datanyze case that Judge Alsup heavily cites (but disagrees with). Judge Alsup distinguishes that case, in part, on the basis that California state law does not support its holding. But the California Court of Appeals just expressly adopted the holding in Torbit in a case a few weeks ago.

6
sriram_sun 1 day ago 2 replies      
I remember reading on HN that at least one CMU researcher got paid 3 times their annual salary as a sign-on bonus to join the self-driving car team. I'm wondering how removed the Pittsburgh team was from Otto.
7
cheeze 1 day ago 1 reply      
This is a pretty fascinating scenario. Between this and everything else that Uber is going through at the moment, it seems that they are going to have a very rough year.

Here's to hoping that more people start using competitors so that it's easier for me to find a Lyft as quickly as I can find an uber.

8
meddlepal 1 day ago 2 replies      
Uber should probably be shitting its pants right about now. Google would have to really bungle this to not get a jury to agree Uber has been acting very shady.
9
SpartanMindset 1 day ago 3 replies      
Always interesting to see two companies with an unlimited lawyer fund go at it in court.
10
steveb0x 1 day ago 8 replies      
Man I hate that this is the end for Uber. I've recently been trialing Lyft and, at least in my area, it always takes longer and is more expensive.

But...if Uber truly is a bunch of scumbags, they deserve to burn.

11
woodandsteel 7 hours ago 0 replies      
Uber is claiming that Waymo has to enter arbitration because Lewandowski had an arbitration agreement with them when he worked there.

Imagine if that was legally correct. Suppose a fellow worked at 5 companies in the course of 10 years, and with each company his contract included an arbitration clause and other interesting items.

Then company #5 would have to honor everything relevant in the previous 4 contracts. What a mess that would be. Alsup was right to reject Uber's claim.

12
icinnamon 1 day ago 2 replies      
The articles keep referring to Waymo as "Waymo LLC". Almost every venture I've seen has been a corporation, not an LLC... anything interesting as to why it's an LLC?
13
omarchowdhury 1 day ago 1 reply      
What % would Uber's stock drop today if it were a public company?
14
beedogs 22 hours ago 0 replies      
Fantastic news. The sooner Uber is no longer a company, the better. Possible criminal charges are the icing on the cake.
15
cryptos 22 hours ago 3 replies      
What companies will profit when Uber fails?
16
Abtin88 16 hours ago 1 reply      
Hypothetically speaking, what's gonna happen if Uber open-sources their lidar technology now?
17
easilyBored 20 hours ago 2 replies      
I have a feeling that as soon as the case against Uber is done, Google is going to go after Lewandowski...try to teach him a lesson and send a message to other Googlers thinking about doing the same.

For strategic reasons they might have chosen to go after Uber first.

18
ameen 20 hours ago 2 replies      
I don't think Uber will cease to exist as is. I'd like to think it's too big to fail. But Alphabet is no small Corp. They can absolutely crush Uber if they want to.

Would be a shame if this is the end of the road for Uber. For all of their scandals, they've been really bullish on innovation and pushed the envelope on moving the industry forward. I hope only the guilty are charged instead of Uber as a company having to suffer due to the wrongdoing of a few individuals.

12
Remotely Exploitable Type Confusion in Windows 8, 8.1, 10, Windows Server, etc chromium.org
590 points by runesoerensen  4 days ago   179 comments top 26
1
statictype 4 days ago 8 replies      
NScript is the component of mpengine that evaluates any filesystem or network activity that looks like JavaScript. To be clear, this is an unsandboxed and highly privileged JavaScript interpreter that is used to evaluate untrusted code, by default on all modern Windows systems. This is as surprising as it sounds.

Double You Tee Eff.

Why would mpengine ever want to evaluate javascript code coming over the network or file system? Even in a sandboxed environment?

What could they protect against by evaluating the code instead of just trying to lexically scan/parse it?

(I'm sure they had a reason - wondering what it is)

2
to3m 4 days ago 5 replies      
SourceTree is pretty much unusable on my laptop, because every time it does anything the antimalware service springs into life and uses up anything from 20%-80% of the CPU power available. I've had it take 30 seconds to revert 1 line. It's stupid.

I was very much prepared to blame Atlassian for this, but maybe I need to start thinking about blaming Microsoft instead, because it sounds like they've made a few bad decisions here.

(Still, if my options are this, or POSIX, I'll take this, thanks. Dear Antimalware Service Executable, please, take all of my CPUs; whatever SourceTree is doing, I can surely wait. Also, please feel free to continue to run fucking Javascript as administrator... I don't mind. It's a small price to pay if it means I don't have to think about EINTR or CLOEXEC.)

3
jeffy 3 days ago 0 replies      
Contents of the PoC are a ".zip" file that is actually plain-text (the engine ignores extension/mime types) and contains just this line of JS and 90kb of nonsense JS for entropy.

(new Error()).toString.call({message: 0x41414141 >> 1})

It's hard to imagine MS doesn't receive tons of watson crash reports of MsMpEng from trying to run bits of random JS. If they haven't looked at them, they probably should start now.

4
pierrec 4 days ago 0 replies      
I think this sentence sums up the severity pretty well:

The attached proof of concept demonstrates this, but please be aware that downloading it will immediately crash MsMpEng in its default configuration and possibly destabilize your system. Extra care should be taken sharing this report with other Windows users via Exchange, or web services based on IIS, and so on.

And I think the intended formulation was "care should be taken sharing this report with other Windows users or via Exchange, or web services based on IIS..." (because they're afraid it could crash the servers even if sharing between non-Windows users!)

5
scarybeast 4 days ago 1 reply      
Props on the fast fix; anti-props on running an unsandboxed JavaScript engine at SYSTEM privileges and feeding it files from remote.
6
e12e 3 days ago 1 reply      
Did anyone manage to figure out a simple powershell incantation to check whether a system is properly patched/secure?

https://technet.microsoft.com/en-us/library/security/4022344

Simply lists: "Verify that the update is installed

Customers should verify that the latest version of the Microsoft Malware Protection Engine and definition updates are being actively downloaded and installed for their Microsoft antimalware products.

For more information on how to verify the version number for the Microsoft Malware Protection Engine that your software is currently using, see the section, "Verifying Update Installation", in Microsoft Knowledge Base Article 2510781.

For affected software, verify that the Microsoft Malware Protection Engine version is 1.1.10701.0 or later."

As far as I can figure out, if:

Get-MpComputerStatus|where -Property AMEngineVersion -ge [version]1.1.10701.0|select AMEngineVersion

prints something like:

  AMEngineVersion
  ---------------
  1.1.13704.0
according to MS one should be patched-up and good to go? (The command should print nothing on vulnerable systems).

However a Hyper-V VM last patched before Christmas (it's not networked) lists its version as 1.1.12805.0 -- which certainly seems to be a higher version than 1.1.10701.0?

I'll also note that using "[version]x.y.z.a" apparently does not force some kind of magic "version compare"-predicate, based on some simple tests.

Any powershell gurus that'd care to share a one-liner to check if one has the relevant patches installed?

Am I looking at the wrong property?

7
pedrow 3 days ago 3 replies      
Quick question on the timings of this. The report says that "This bug is subject to a 90 day disclosure deadline." - does that mean it was discovered 90 days ago and has been published now, or it was discovered on May 6 (as dates on the comments seem to suggest) and Microsoft has responded very quickly? In either case it seems strange not to have waited a couple more days because (for my system, anyway) I was still running the vulnerable version even after the report was made public.
8
icf80 3 days ago 0 replies      
The affected products:

Microsoft Forefront Endpoint Protection 2010

Microsoft Endpoint Protection

Microsoft Forefront Security for SharePoint Service Pack 3

Microsoft System Center Endpoint Protection

Microsoft Security Essentials

Windows Defender for Windows 7

Windows Defender for Windows 8.1

Windows Defender for Windows RT 8.1

Windows Defender for Windows 10, Windows 10 1511, Windows 10 1607, Windows Server 2016, Windows 10 1703

Windows Intune Endpoint Protection

Last version of the Microsoft Malware Protection Engine affected by this vulnerability Version 1.1.13701.0

First version of the Microsoft Malware Protection Engine with this vulnerability addressed Version 1.1.13704.0

https://technet.microsoft.com/en-us/library/security/4022344

9
arca_vorago 3 days ago 2 replies      
I'm pretty close to just saying I refuse to work on Windows systems anymore.
10
NKCSS 3 days ago 1 reply      
Turn off Windows Defender:

  Windows Registry Editor Version 5.00

  [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows Defender]
  "DisableAntiSpyware"=dword:00000001
Then reboot.

On the other hand: Microsoft has already issued a fix: https://twitter.com/msftsecresponse/status/86173436019355238...

But still, the auto-unpack of archives leaves me wanting to just disable it completely.

11
jonstewart 3 days ago 0 replies      
Does MsMpEng actually do file analysis itself, unpacking, unarchiving, &c? That's the kind of stuff that should usually be sandboxed. If its zip/rar/7zip/cab/whatever support hasn't been formally verified and those components run as SYSTEM, es no bueno.
12
windsurfer 3 days ago 0 replies      
This also includes Windows 7 and anything running Microsoft Security Essentials, but does not include any Windows Server other than 2016.
13
dboreham 4 days ago 2 replies      
It only took two days to fix this and release the patch? Impressed.
14
caf 4 days ago 1 reply      
As mpengine will unpack arbitrarily deeply nested archives...

Surely not - what happens if you feed it the zipfile quine?

15
dagaci 3 days ago 2 replies      
I am not happy that Google has published a full exploit well before it has been possible for anyone to actually deploy the patch, and within just 3 days of notifying the vendor.

It seems that Google is eager for someone to use this exploit to attack as many systems as possible before they can be patched against it.

16
ezoe 3 days ago 1 reply      
So MS's anti malware software does:

1. Execute NScript, a JavaScript-like language.

2. Run as high privileged, non-sandboxed process.

3. Intercept filesystem changes and run NScript code written to anywhere, including browser cache.

4. Do not check code signing.

This is unbelievably ridiculous. It should not happen in software which claims to improve "security".

As I always said, there is no good anti malware software. Everything sucks.

An additional software is an additional security risk.

17
rubatuga 3 days ago 1 reply      
Congratulations Microsoft, on your best exploit yet!
18
btb 3 days ago 0 replies      
Good that it was fixed. But now bad actors will be looking very hard for other bugs in the unsandboxed javascript interpreter. Tempting to just disable windows defender completely.
19
jbergstroem 4 days ago 2 replies      
Exploitability Assessment for Latest Software Release: 2 - Exploitation Less Likely

Exploitability Assessment for Older Software Release: 2 - Exploitation Less Likely

Anyone with ideas on how they came to this conclusion? Yes, I read the linked document but felt that the index assessment didn't really reflect that google (Natalie?) seems to have found this "in the wild".

20
polskibus 3 days ago 0 replies      
I wonder how it affects Azure? Can such a security hole affect Azure security?
21
binome 3 days ago 1 reply      
At least the good guys found this one first, and it is in Windows Defender, and the definitions should automatically update in 24hrs or less silently without a reboot.
22
ms_skunkworks 3 days ago 0 replies      
Was mpengine developed by Microsoft Research?
23
nathan_f77 3 days ago 1 reply      
This is amazing work. Does anyone know how much someone like Tavis Ormandy would be getting paid? Would it be 7 figures?
24
nthcolumn 3 days ago 0 replies      
malware injection service lol.
25
madshiva 3 days ago 2 replies      
Hey Tavis,

if you read this, could you tell Microsoft to fix the issue with definition updates that won't be removed after updating? The definitions keep growing and waste space. (The problem solves itself if the computer is rebooted.)

Thanks :)

26
Kenji 3 days ago 1 reply      
Me, almost a year ago:

https://news.ycombinator.com/item?id=12184173

Despite getting all the downvotes, who is looking stupid now?

13
Beware of Transparent Pixels adriancourreges.com
554 points by tsemple  2 days ago   91 comments top 19
1
dahart 2 days ago 1 reply      
Really nice article! Succinctly demonstrates the problem with not using premultiplied alpha.

> As an Artist: Make it Bleed!

> If youre in charge of producing the asset, be defensive and dont trust the programmers or the engine down the line.

If you are an artist working with programmers that can fix the engine, your absolute first choice should be to ask them to fix the blending so they convert your non-premultiplied images into premultiplied images before rendering them!

Do not start bleeding your mattes manually if you have any say in the matter at all, that doesn't solve the whole problem, and it sets you up for future pain. The only right answer is for the programmers to use premultiplied images. What if someone decides to blur your bled transparent image? It will break. (And there are multiple valid reasons this might happen without your input.)

Even if you have no control over the engine, file a bug report. But in that case, go ahead and bleed your transparent images manually & do whatever you have to, to get your work done.

Eric Haines wrote a more technical piece on this problem that elaborates on the other issues besides halo-ing:

http://www.realtimerendering.com/blog/gpus-prefer-premultipl...
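
To make the halo concrete, here is a minimal Java sketch of the blending math -- the names and values are illustrative, not from either article. It samples halfway between an opaque red texel and a fully transparent texel whose hidden RGB is black, then composites over white:

  // Straight alpha interpolates the hidden black into the result;
  // premultiplied alpha gives the transparent texel zero contribution.
  public class AlphaHaloDemo {
      public static void main(String[] args) {
          double rOpaque = 1.0, aOpaque = 1.0; // opaque red texel (red channel)
          double rTrans  = 0.0, aTrans  = 0.0; // transparent texel, black RGB

          // Straight (non-premultiplied): lerp color and alpha separately,
          // then composite over white: out = src*a + dst*(1 - a).
          double rS = 0.5 * (rOpaque + rTrans);         // 0.5
          double aS = 0.5 * (aOpaque + aTrans);         // 0.5
          double straight = rS * aS + 1.0 * (1.0 - aS); // 0.75 -> dark fringe

          // Premultiplied: texels store color*alpha, and the composite is
          // out = src + dst*(1 - a).
          double rP = 0.5 * (rOpaque * aOpaque + rTrans * aTrans); // 0.5
          double premul = rP + 1.0 * (1.0 - aS);                   // 1.0 -> correct

          System.out.printf("straight %.2f vs premultiplied %.2f%n", straight, premul);
      }
  }

Both the red texel and the white background have a full red channel, so the correct blend keeps red at 1.0; the straight-alpha path darkens it to 0.75, which is exactly the gray halo from the article.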

2
tantalor 2 days ago 4 replies      
Reminds me of "Is there a reason Hillary Clinton's logo has hidden notches?"https://graphicdesign.stackexchange.com/questions/73601/is-t...
3
dvt 2 days ago 2 replies      
> Even with an alpha of 0, a pixel still has some RGB color value associated with it.

Wish the article was more clear as to why this happens. Let me elucidate: this happens because, per the PNG standard[0], 0-alpha pixels have their color technically undefined. This means that image editors can use these values (e.g. XX XX XX 00) for whatever -- generally some way of optimizing, or, more often than not, just garbage. There are ways to get around this by using an actual alpha channel in Photoshop[1], or by using certain flags in imagemagick[2].

[0] https://www.w3.org/TR/PNG/

[1] https://feedback.photoshop.com/photoshop_family/topics/png-t...

[2] http://www.imagemagick.org/discourse-server/viewtopic.php?t=...

4
fnayr 2 days ago 3 replies      
This is extremely useful to take advantage of (that you can store RGB values in 0-alpha pixels). I've written some pretty simple but powerful shaders for a game I'm working on by utilizing transparent pixels' "extra storage", which allowed for either neat visuals or greatly reduced the number of images required to achieve a certain effect. For instance, I wrote a shader for a character's hair that had source images colorized in pure R, G, and B and then mapped those to a set of three colors defining a "hair color" (e.g. R=dark brown, G=light brown, B=brown). If I didn't have the transparent pixels storing nonzero RGB values, the blending between pixels within the image would be jagged and the approach would have been unacceptable for production quality, leading to each hair style being exported in each hair color.

As a total side note, I really enjoyed the markup on the website. Seeing the matrices colored to represent their component color value is really helpful for understanding. Nice job author!
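
A rough CPU-side sketch of that channel-remap idea (hypothetical Java; the palette values, names and types are invented for illustration, this is not the actual shader):

  // The source image's pure R/G/B channels act as blend weights for a
  // three-color "hair color" palette; alpha passes through untouched.
  public class HairRemap {
      static final double[] DARK_BROWN  = {0.25, 0.15, 0.10}; // invented palette
      static final double[] LIGHT_BROWN = {0.65, 0.45, 0.30};
      static final double[] BROWN       = {0.45, 0.30, 0.20};

      // src = {r, g, b, a} in 0..1, sampled from the R/G/B-colorized source
      static double[] remap(double[] src) {
          double[] out = new double[4];
          for (int i = 0; i < 3; i++) {
              out[i] = src[0] * DARK_BROWN[i]
                     + src[1] * LIGHT_BROWN[i]
                     + src[2] * BROWN[i];
          }
          out[3] = src[3];
          return out;
      }
  }

Because filtered samples near the edges still carry sensible RGB weights even at low alpha, the blend stays smooth instead of pulling toward black.
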
5
modeless 2 days ago 2 replies      
I don't like this article because it blames the wrong people and buries the real solution, premultiplied alpha, at the bottom. Already there are many comments here that are confused because they didn't even see the premultiplied alpha part of the article.

The issue with the Limbo logo was not that the source image was incorrect. The image was fine. The blending was incorrect because the PS3 XMB has a bug. Not using premultiplied alpha when you are doing texture filtering is a bug.

6
VikingCoder 2 days ago 4 replies      
Premultiplied alpha results in less color depth, though. If my alpha is 10%, then my possible RGB values become 0-25. Even if I multiply by 10, I still lose the maximum possible values 251-255, and only values 0, 10, 20, 30... 250, are possible.

The correct solution is to pay close attention to all of the factors... and to be ESPECIALLY aware of pixel scaling. Provide your RGBA textures at the 1:1 pixel scale they will be rendered (or higher!) if at all possible.
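
A quick sketch of the quantization being described (hypothetical Java, 8-bit storage, alpha fixed at 10%; simple truncation is used when storing, real pipelines may round instead):

  // Round-trip through 8-bit premultiplied storage at alpha = 10%:
  // stored values collapse to 0..25, so recovery only yields multiples of 10.
  public class PremulPrecision {
      public static void main(String[] args) {
          double alpha = 0.1;
          for (int c : new int[]{200, 203, 207, 251, 255}) {
              int stored    = (int) (c * alpha);             // truncated to 0..25
              int recovered = (int) Math.round(stored / alpha); // 0, 10, ... 250
              System.out.printf("%3d -> stored %2d -> recovered %3d%n",
                      c, stored, recovered);
          }
      }
  }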

7
jamesbowman 2 days ago 1 reply      
Using premultiplied alpha avoids this. Jim Blinn's books from the 90s give a very thoughtful treatment of the topic.
8
Kenji 2 days ago 1 reply      
You also have a similar problem when you render opaque, rectangular images without the clamp edge mode while the renderer is in tiling mode: the borders wrap around when your picture is halfway between pixels and become a mix of the top/bottom or left/right colour, corrupting the edges. Easy to fix, but annoying until you figure out what it is that corrupts your edges.

Also: "The original color can still be retrieved easily: dividing by alpha will reverse the transformation."

C'mon, you can't say that and then make an example with alpha=0. Do you want me to divide by zero? The ability to store values in completely transparent pixels is lost.

9
jayshua 2 days ago 3 replies      
While reading this article, it struck me that the amount of "useless" data increases as the alpha value approaches 0. For example: in a pixel with rgba values of (1.0, 0.4, 0.5, 0.0), the rgb values are redundant. Is there a color format that would prevent this redundancy? Perhaps by some clever equation that incorporates the alpha values into the rgb values? I don't think Premultiplied alpha would work, because you still need to store the alpha value for compositing later...
10
br1 1 day ago 0 replies      
I'm surprised that John Carmack seems not to use premultiplied alpha and recommends bleeding: https://www.facebook.com/permalink.php?story_fbid=1818885715...
11
panic 2 days ago 0 replies      
Premultiplied alpha is also more "correct" in that it separates how much each pixel covers things behind it (the alpha value) from the amount of light it is reflecting or emitting (the color values). These two values should really be interpolated separately, and that's what premultiplied alpha gives you.
12
eukara 13 hours ago 0 replies      
Used to experience this all the time when making maps with custom textures for older games... Lots of people sure didn't, though. Especially source ports that would apply filtering to games that didn't have any in the first place - you'd see blue or purple outlines because their original formats were obviously paletted.
13
Kiro 2 days ago 6 replies      
> pay attention to what color you put inside the transparent pixels

I don't understand this. When I make transparency I don't use any color? I use the Eraser tool or Ctrl-X, not a color with 0 opacity.

14
blauditore 1 day ago 0 replies      
This is also relevant for CSS: Some browsers (I think Safari) treat "transparent" as "transparent black" for gradients, so "linear-gradient(transparent, white)" will result in unexpected grayish parts in the gradient. As a workaround, one needs to use "linear-gradient(rgba(255, 255, 255, 0), white)" instead.
15
leni536 1 day ago 0 replies      
Note that SVG 1.1 doesn't have an option for color interpolation to work in premultiplied/associated alpha. SVG 2 is not finalized though, I added an issue some time ago.

https://github.com/w3c/svgwg/issues/303

It affects gradients, animations and imported+scaled raster images. Maybe other stuff too, I don't know.

16
CurtMonash 1 day ago 0 replies      
I thought this would be about web or email tracking.
17
qwerta 1 day ago 0 replies      
There is also performance overhead.
18
xchip 2 days ago 0 replies      
TL;DR: use premultiplied alpha for transparency
19
ninjakeyboard 2 days ago 0 replies      
s/sawn/swan/
14
Windows 10 on ARM msdn.com
434 points by vlangber  1 day ago   276 comments top 37
1
satysin 1 day ago 6 replies      
Microsoft are seriously killing it in emulation these days.

First I was amazed with them getting Xbox 360 (PPC) games running at full speed or better on the Xbox One (x86) and now we have x86 on ARM.

They have some wizards working on this stuff.

Edit: I wonder if Dave Cutler is involved in this x86 on ARM stuff? I think he was with the Xbox 360/Xbox One.

2
xemoka 1 day ago 1 reply      
This excites me. I work with a lot of geospatial industry tools; many mobile tools are still based on old Windows CE / Pocket PC / Windows Mobile platforms. I hope to see these Snapdragon devices overtake this area.

Mobile data collection tools on iOS and android devices are rather poor (at least open source ones), hopefully this will help solve this problemwe'll just be able to use windows applications that work and are better developed. The windows ecosystem still seems easier to me than iOS / Android, perhaps that's just my bias or that I see more development options.

This could be a huge turning point for Microsoft's mobile divisions.

x86 drivers are still something I'm curious about here...

[edit: I should also mention, the reason why geospatial tools come into play here is that the Snapdragon CPUs have integrated GSM and GNSS (GPS) on the die. Currently, x86-based tablets have to use a separate component, and many don't come with it or even provide GNSS as an option]

3
wand3r 1 day ago 4 replies      
I am seriously impressed with Microsoft. I haven't used windows in years but they are releasing tons of useful things. VSCode is great on Ubuntu and every version of OS X I have run it on. Plus tons of other cool things like bash on win; etc.

Also, I have a bizspark Azure subscription and it's not a perfect UX; but man does it beat AWS

4
poizan42 1 day ago 4 replies      
For any Microsoft insiders with knowledge about this, a few questions:

1. In the talk they claim that they get "near native" speed from the translation. Can we have some real numbers on what can be expected? What about warm-up time?

2. Is the x86 translation layer only available in user space, or can x86 drivers be loaded for hardware that doesn't have ARM drivers yet (or whose manufacturer doesn't care about making them)?

3. Are there any plans in the future for supporting x64 code as well in the translation layer?

Edit: One more:

4. Are there any plans on supporting Arm v7 in the future, e.g. with Surface/Surface 2 support?

5
daburninatorrr 1 day ago 3 replies      
Man, that League of Legends desktop icon on the test machine is such a tease. I would love to know what the status of x86 gaming on ARM devices would be, especially for something competitive that runs on most anything like LoL
6
niftich 1 day ago 0 replies      
Does this emulation work for x86-64 executables too?

EDIT: no [1].

[1] https://www.theverge.com/2016/12/7/13866936/microsoft-window...

7
drewg123 1 day ago 2 replies      
I'm surprised that nobody has mentioned FX!32. That was the binary translation layer that Windows NT on DEC Alpha used back in 1996 or so to run x86 Windows binaries. There is an old Digital Technical Journal article about it here: http://www.hpl.hp.com/hpjournal/dtj/vol9num1/vol9num1art1.pd...

Everything old is new again!

8
derefr 1 day ago 1 reply      
Odd thought: the existence of an ARM Windows, makes it much simpler for Apple to ship ARM PCs.

The macOS development stack has been re-tooled to output LLVM bitcode within its "fat" binaries for a good while now. It'd be very simple for Apple to throw the switch on a compile farm ala the one Google has for Android APKs, and suddenly have ARM downloads for everything on the Mac App Store (without requiring any re-submissions.) Which means it wouldn't be hard at all for Apple to ship a "functional" ARM macOS computer... just, until now, such a machine wouldn't have had a very good Windows story. No Boot Camp, no cheap virtualization, etc.

Suddenly, that story is a solved problem.

9
sxates 1 day ago 2 replies      
There have been rumors of a 'surface phone' that runs Win 10 (not mobile) for a couple years now. Most expected some kind of future Intel chip to enable that, but ARM support certainly opens up the possibility. Just what Windows phones need - another OS change!
10
pjc50 1 day ago 2 replies      
Interesting - something they should have done rather than the orphan WindowsRT, in my opinion.

I wonder if this ends up being the inheritor of Windows CE for embedded ARM-flavoured devices. I also wonder if this means that the Win10 ARM kernel has the full Windows API - so that an ARM PE executable you built could run natively. I suspect 95% of the pieces are in place for that but it's not yet been productised.

11
ChuckMcM 1 day ago 1 reply      
Really interesting.

I can see a potential milestone: Windows 10 S on an ARM v8 based laptop with all-day-plus battery. Sort of a Surface RT but without any excuses.

If Apple follows suit and creates a lightweight, network-centric laptop experience around iOS, then you'll have three contenders for the 'appliance' environment: ChromeOS, Windows 10 S, and iOS. Each with their own 'laptop' design ethic: Pixel, Surface, MacBook.

12
faragon 1 day ago 4 replies      
If that runs great on phones, and has virtualization support for e.g. running ARM Linux guests, I would change both my Android phone and my Ubuntu laptop for one phone (e.g. Snapdragon 835 or better + 8-12GB RAM + 256GB flash + USB 3.0 OTG), using a dummy screen + keyboard as laptop replacement.
13
fnord123 1 day ago 5 replies      
They are doing this because they want the benefits of great ARM battery life. But they are emulating x86 on ARM. So they believe that emulated x86 on ARM will have a better battery life than native x86. The only way I think this could be possible is if they measure battery life of an idle device. Maybe someone can disabuse me of this belief.
14
kev009 1 day ago 2 replies      
Microsoft is one of the few companies that really grok Systems Software right now.
15
0x0 1 day ago 2 replies      
Does the x86/x64 cpu emulation work for JIT code or other self-modifying code? (Copy protected games, for example?) In the linked presentation they just briefly mention doing a load time(?) transpile of x86 to arm64 and caching the result on disk. What about exe packers that map memory rw then rx after unpacking, such as UPX?
16
mmcconnell1618 1 day ago 1 reply      
Windows 10 on ARM would allow Apple Bootcamp on ARM-based MacBooks. There have been rumors about Apple moving to ARM and potentially bringing processors in-house. I wonder if that influenced Microsoft's thinking.
17
israrkhan 1 day ago 0 replies      
This is how Windows RT should have been in the first place.
18
wolfgke 1 day ago 2 replies      
Does this mean that Windows 10 (not Windows 10 IoT Core) will also come to Raspberry Pi 3?
19
ericfrederich 12 hours ago 0 replies      
If they're supporting x86 emulation to run desktop apps, will they support compiling native desktop ARM apps?

They're essentially supporting the end result; I would hope they don't force developers to go through emulation to try to encourage them to write a Universal Windows app.

20
mataug 1 day ago 0 replies      
Wow, this is brilliant. Microsoft is making leaps-and-bounds improvements to keep their consumer base happy. Almost ten years after switching to Linux, I think I might gladly use a Windows machine as a second PC.
21
yuhong 1 day ago 0 replies      
I hope that recompiled desktop apps will be allowed in addition to emulation.
22
angryteabag 19 hours ago 0 replies      
I don't know much about where to find these chips or what I am looking for, but...

http://i.imgur.com/mRdk6ZD.jpg

This is an ancient chip; however, I also found MediaTek ones for ~$10 which have 4 cores and are 1.6GHz SoCs.

Sounds to me like we could seriously see $100 laptops that actually function soon.

23
wfunction 1 day ago 1 reply      
Can someone explain how they are achieving high execution speed? I would have expected the speeds (or at least the start-up times, if we have dynamic recompilation) for x86 on ARM to be outright abysmal.
24
wfunction 1 day ago 0 replies      
Anybody know when we can expect a phone out that can emulate x86 Windows 10? End of 2017 is the first estimate I heard but I'm not sure how realistic that is.
25
Keyframe 1 day ago 0 replies      
That company could have so much impact if only they had laser focus and distribution execution. Still so much talent with them.
26
cptskippy 1 day ago 1 reply      
That's a Microsoft Intellimouse Explorer Mouse and a Belkin USB 2.0 hub. They're like 10 years old.
27
kabdib 1 day ago 0 replies      
Props to D.M. for the pivotal work. You know who you are :-)

-- ST boot sector guy

28
mtgx 1 day ago 3 replies      
HN is probably not the right target for this type of device. Either way we need Windows 10 on ARM to succeed for Intel to have more competition. I also do think it will be successful if it works at least as well as Intel's Atom-based Celeron and Pentium laptop chips, especially in emerging markets.

Intel shot itself in the foot by replacing the Core architecture in mobile Celerons and Pentiums with Atom, and also by starting to rename lower performance Core M chips to Core i3 and Core i5. This will make it easier for ARM and AMD to "catch-up" and even beat Intel at these levels, because Intel got greedy and tried to trick the market with lower-performing chips at the same price points as for previous (and more powerful) generations.

29
nickhalfasleep 1 day ago 3 replies      
Great potential for a Win10 phablet I can develop on that also lets me make phone calls. Not for big projects, but an anywhere device. Does Win10 support non-VoIP calls?
30
asveikau 1 day ago 3 replies      
Emulation should be a last resort. They should ship an SDK with all the proper import libs and a cl.exe etc. that can target proper Win32 on ARM.
31
Skywing 1 day ago 2 replies      
I'm out of the loop on ARM. Can somebody explain the significance of it?
32
platipuss 1 day ago 2 replies      
I hope they drop this on the surface 2. It would save it from being a paper weight.
33
0xFFC 1 day ago 2 replies      
Please correct me if I am wrong, but I think with WSL and this, Microsoft proved they are far superior to Google and Apple when it comes to serious system software development (they have to be; they have developed one of the most complex kernels of all time and maintained and improved it for decades. Yes, I am a Linux guy too, but the Windows kernel is an extremely complex and well-architected piece of software). Yes, Google does have very good applications, and Apple does too.

But when it comes to hardcore system software development, they are unbeatable. Emulating the entire x86 architecture on ARM? This is mind-blowing.

They have pulled off good emulation before too (one instance, I think, was inside the Xbox).

But this is going to be serious. I would say very serious.

Extremely good battery life would give Microsoft a very good edge over Apple.

1) Qualcomm will be the winner, and so will other ARM CPU manufacturers; Intel is the biggest loser here.

2) Microsoft will hit the market with this. After that they will be in an extremely good position to release a good phone, and from here I see a successful future for Windows Phone.

3) From a computer architecture perspective, the bottleneck is not memory or CPU speed; it is IO and energy. I don't know how this will impact IO, but I am almost sure this is unbeatable on energy efficiency, and when I (and almost all people I know) buy a laptop, battery life is almost the most important aspect of it.

34
mikerg87 1 day ago 0 replies      
Any word on actual devices for sale that can run this? Configurations?
35
Dolores12 1 day ago 1 reply      
Ha-ha, Microsoft will do it again. For Windows on all devices!
36
Zigurd 1 day ago 0 replies      
Strategically interesting because, up to now, the Dalvik bytecode runtime and the subsequent pre-compiling ART runtimes were the best way to do ISA-independent application environments. Apple's fat binaries got in early enough in their (smaller) developer community for Apple to avoid being tied to an ISA. Microsoft was handcuffed to an x86 legacy boat anchor. But now they have escaped, and those other approaches are less of an advantage.
37
d9 1 day ago 2 replies      
What about .Net on ARM?
15
The tragedy of 100% code coverage (2016) ig.com
528 points by tdurden  4 days ago   339 comments top 71
1
cbanek 4 days ago 12 replies      
I've had to work on mission critical projects with 100% code coverage (or people striving for it). The real tragedy isn't mentioned though - even if you do all the work, and cover every line in a test, unless you cover 100% of your underlying dependencies, and cover all your inputs, you're still not covering all the cases.

Just because you ran a function or ran a line doesn't mean it will work for the range of inputs you are allowing. If your function that you are running coverage on calls into the OS or a dependency, you also have to be ready for whatever that might return.

Therefore you can't tell if your code is right just by having run it. Worse, you might be lulled into a false sense of security by saying it works because that line is "covered by testing".

The real answer is to be smart, pick the right kind of testing at the right level to get the most bang for your buck. Unit test your complex logic. Stress test your locking, threading, perf, and io. Integration test your services.
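
To make that concrete, here's a tiny hypothetical Java example (not from the article): two tests give this method 100% line and branch coverage, and it is still wrong for one input.

  // Fully "covered" by abs(5) and abs(-5), yet broken for one input:
  // in two's complement, -Integer.MIN_VALUE overflows back to itself.
  static int abs(int x) {
      return x < 0 ? -x : x;
  }
  // abs(5)  == 5                                  // covers the x >= 0 branch
  // abs(-5) == 5                                  // covers the x < 0 branch
  // abs(Integer.MIN_VALUE) == Integer.MIN_VALUE   // still negative!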

2
mannykannot 3 days ago 2 replies      
There are a few relevant facts that should be known to everyone (including managers) involved in software development, but which probably are not:

1) 100% path coverage is not even close to exhaustively checking the full set of states and state transitions of any usefully large program.

2) If, furthermore, you have concurrency, the possible interleavings of thread execution blow up the already-huge number of cases from 1) to the point where the latter look tiny in comparison.

3) From 1) and 2), it is completely infeasible to exhaustively test a system of any significant size.

The corollary of 3) is that you cannot avoid being selective about what you test for, so the question becomes, do you want that decision to be an informed one, or will you allow it to be decided by default, as a consequence of your choice to aim for a specific percentage of path coverage?

For example, there are likely to many things that could be unit-tested for, but which could be ruled out as possibilities by tests at a higher level of abstraction. In that case, time spent on the unit tests could probably be better spent elsewhere, especially if (as with some examples from the article) a bug is not likely.

100% path coverage is one of those measures that are superficially attractive for their apparent objectivity and relative ease of measuring, but which don't actually tell you as much as they seem to. Additionally, in this case, the 100% part could be mistaken for a meaningful guarantee of something worthwhile.

3
iamleppert 3 days ago 15 replies      
The worse the developer, the more tests he'll write.

Instead of writing clean code that makes sense and is easy to reason about, he will write long-winded, poorly abstracted, weird code that is prone to breaking without an extensive "test suite" to hold the madness together and god forbid raise an alert when some unexpected file over here breaks a function over there.

Tests will be poorly written, pointless, and give an overall false sense of security to the next sap who breaths a sigh of relief when "nothing is broken". Of course, that house of cards will come down the first time something is in fact broken.

I've worked in plenty of those environments, where there was a test suite, but it couldn't be trusted. In fact, more often than not that is the case. The developers are a constant slave to it, patching it up; keeping it all lubed up. It's like the salt and pepper on a shit cake.

Testing what you do and developing ways to ensure it's reliable, fault-tolerant and maintainable should be part of your ethos as a software developer.

But being pedantic about unit tests, chasing after pointless numbers and being obsessed with a certain kind of code is the hallmark of a fool.

4
mikestew 4 days ago 2 replies      
The tragedy of 100% code coverage is that it's a poor ROI. One of the things that stuck with me going on twenty years later is something from an IBM study that said 70% is where the biggest bang-for-the-buck is. Now maybe you might convince me that something like Ruby needs 100% coverage, and I'd agree with you since some typing errors (for example) are only going to come up at runtime. But a compiled (for some definition of "compiled") language? Meh, you don't need to check every use of a variable at runtime to make sure the data types didn't go haywire.

The real Real Tragedy of 100% coverage is the number of shops who think they're done testing when they hit 100%. I've heard words to that effect out of the mouth of a test manager at Microsoft, as one example. No, code coverage is a metric, not the metric. Code coverage doesn't catch the bugs caused by the code you didn't write but should have, for example. Merely executing code is a simplistic test at best.

5
algesten 4 days ago 3 replies      
My main issue with unit testing is what defines a unit?

Throughout my career I find tests that test the very lowest implementation detail, like private helper methods, and even though a project can achieve 100% coverage it still is no help avoiding bugs or regression.

Given a micro service architecture I now advocate treating each service as a black box and focus on writing tests for the boundaries of that box.

That way tests actually assist with refactoring rather than being something that just exactly follows the code and breaks whenever a minor internal detail changes.

However, occasionally I do find it helpful to map out all input/output for an internal function to cover all edge cases. But that's an exception.

6
xg15 3 days ago 0 replies      
I agree (mostly) with the authors standpoints, but his arguments to get there are not convincing:

> You don't need to test that. [...] The code is obvious. There are no conditionals, no loops, no transformations, nothing. The code is just a little bit of plain old glue code.

The code invokes a user-passed callback to register another callback and specifies some internal logic if that callback is invoked. I personally don't find that obvious at all.

Others may find it obvious. That's why I think, if you start with the notion "this is necessary to test, that isn't", you need to define some objective criteria for when things should be tested. Relying on your own gut feeling (or expecting that everyone else magically has the same gut feeling) is not a good strategy.

If I rewrite some java code from vanilla loops-with-conditionals into a stream/filter/map/collect chain, that might make it more obvious, but it wouldn't suddenly remove the need to test it, would it?
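
For instance, something like the following (hypothetical types, just to illustrate the kind of rewrite meant here):

  import java.util.ArrayList;
  import java.util.List;
  import java.util.stream.Collectors;

  // The same filtering logic two ways; the stream version may read as more
  // "obvious", but it is behaviorally identical and needs the same tests.
  // (User, isActive() and getName() are invented for the example.)
  static List<String> activeNamesLoop(List<User> users) {
      List<String> names = new ArrayList<>();
      for (User u : users) {
          if (u.isActive()) names.add(u.getName());
      }
      return names;
  }

  static List<String> activeNamesStream(List<User> users) {
      return users.stream()
              .filter(User::isActive)
              .map(User::getName)
              .collect(Collectors.toList());
  }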

>"But without a test, anybody can come, make a change and break the code!"

>"Look, if that imaginary evil/clueless developer comes and breaks that simple code, what do you think he will do if a related unit test breaks? He will just delete it."

You could make that argument against any kind of automated test. So should we get rid of all kinds of testing?

Besides, the argument doesn't even make sense. No one is using tests as a security feature against "evil" developers (I hope). (One of) the points of tests is to be a safeguard for anyone (including yourself) who might change the code in the future and might not be aware of all the implications of that change. In that scenario, it's very likely you change the code but will have a good look at the failed test before deciding what to do.

7
eksemplar 3 days ago 2 replies      
We've almost stopped unit testing. We still test functionality automatically before releasing anything into production, but we're not writing unit tests in most cases.

Our productivity is way up and our failure rates haven't changed. It's increased our time spent debugging, but not by as much as we had estimated that it would.

I won't pretend that's a good decision for everyone. But I do think people take test-driven-development a little too religiously and often forget to ask themselves why they are writing a certain unit test.

I mean, before I was a manager I was a developer and I also went to a university where a professor once told me I had to unit test everything. But then, another professor told me to always use the singleton pattern. These days I view both statements as equally false.

8
penpapersw 4 days ago 2 replies      
I think a bigger epidemic is we're putting too much emphasis on "do this" and "do that" and "if you don't do this then you're a terrible programmer". While that sometimes may be true, much more important is to have competent, properly trained professionals, who can reason and think critically about what they're doing, and who have a few years of experience doing this under their belt. Just like other skilled trades, there's a certain kind of knowledge that you can't just explain or distill into a set of rules; you have to just know it. And I see that in the first example in this article, where the junior programmer is writing terrible tests because he just doesn't know why they're bad tests (yet).
9
pg314 3 days ago 1 reply      
The article illustrates what happens when you have inexperienced or poor developers following a management guideline.

To see how 100% coverage testing can lead to great results, have a look at the SQLite project [1].

In my experience, getting to 100% takes a bit of effort. But once you get there it has the advantage that you have a big incentive to keep it there. There is no way to rationalise that a new function doesn't need testing, because that would mess up the coverage. Going from 85% to 84% coverage is much easier to rationalise.

And of course 100% coverage doesn't mean that there are no bugs, but x% coverage means that 100-x% of the code is not even run by the tests. Do you really want your users to be the first ones to execute the code?

As an anecdote, in one project where I set the goal of 100% coverage, there was a bug in literally the last uncovered statement before getting to 100%.

[1] https://www.sqlite.org/testing.html

10
hibikir 4 days ago 1 reply      
I might be completely wrong on this one, but it seems to me that a lot of the precepts of TDD and full code coverage have a lot to do with the tools that were used by some of the people that popularized this.

Some of my day involves writing Ruby. I find using Ruby without 100% code coverage to be like handling a loaded gun: I can track many outages to things as silly as a typo in an error handling branch that went untested. A single execution isn't even enough for me: I need a whole lot of testing on most of the code to be comfortable.

When I write Scala at work instead, I test algorithms, but a big percentage of my code is untested, and it all feels fine, because while not every piece of code that compiles works, the kind of bugs that I worry about are far smaller, especially if my code is type heavy, instead of building Map[String,Map[String,Int]] or anything like that. 100% code coverage in Scala rarely feels as valuable as in Ruby.

Also, the value of tests as a way to force good factoring varies by language and paradigm. Most functional Scala doesn't really need redesigning to make it easy to test: functions without side effects are easy, and are easier to refactor. A deep Ruby inheritance tree with some unnecessary monkey patching just demands testing in comparison, and writing the tests themselves forces better design.

The author's code is Java, and there 95% of the reason for testing that isn't purely based on business requirements comes from runtime dependency injection systems that want you to put mutability everywhere. Those are reasons why 100% code coverage can still sell in a Java shop (I sure worked in some that used too many of the frameworks popular in the 00s), but in practice, there's many cases where the cost of the test is higher than the possible reward.

So if you ask me, whether 100% code coverage is a good idea or not depends a whole lot on your other tooling, and I think we should be moving towards situations where we want to write fewer tests.

11
userbinator 3 days ago 2 replies      
But remember nothing is free, nothing is a silver bullet. Stop and think.

I'm going to be the one to point at the elephant in the room and say: Java. More precisely, Java's culture. If you ask developers who have been assimilated into a culture of slavish bureaucratic-red-tape adherence to "best practices" and extreme problem-decomposition to step back and ask themselves whether what they're doing makes sense, what else would you expect? These people have been taught --- or perhaps indoctrinated --- that such mindless rule-following is the norm, and to think only about the immediate tiny piece of the whole problem. To ask any more of them is like asking an ostrich to fly.

The method names in the second example are rather WTF-inducing too, but to someone who has only ever been exposed to code like that, it would probably seem normal. (I counted one of them at ~100 characters. It reminds me of http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom... )

Many years ago I briefly worked with enterprise Java, and found this sort of stifling, anti-intellectual atmosphere entirely unbearable.

12
saltedmd5 3 days ago 1 reply      
The big error being made in this article (and most of the comments here) is the assumption that the purpose of unit tests is to "catch bugs." It isn't.

The purpose of unit tests is to document the intended behaviour of a unit/component (which is not necessarily a single function/method in isolation) in such a way that if someone comes along and makes a change that alters specified behaviour, they are aware that they have done so and are prevented from shipping that change unless they consciously alter that specification.

And, if you are doing TDD, as a code structure/design aid. But that is tangential to the article.

13
state_less 3 days ago 0 replies      
Unit tests are a poor substitute for correctness. Many unit tests does not a strong argument make.

Unit tests are typically inductive. A developer shows cases A, B and C give the expected results for function f. God help us if our expectations are wrong. So, you're saying since A, B and C are correct, therefore function f is correct. Well that may be, or maybe A, B and C are trivial cases; in other words, you've made a weak argument.

100% test coverage sounds like lazy management. Alas, the manager may have worked their way via social programming rather than computer programming. In such cases, better to say you have 110% test coverage.

14
circlefavshape 3 days ago 2 replies      
/me raises hand on the pro-testing side

I've been programming for a living since 1996, and only recently started to do TDD in the normal sense of writing unit tests before writing code. I've found it to be an enormous help with keeping my code simple - the tests or the mocking getting difficult is a great indicator that my code can be simplified or generalised somehow.

I argued for functional instead of unit testing for years, but it was only when a team-mate convinced me to try unit testing (and writing the tests FIRST) that the scales fell from my eyes. Unit testing isn't really testing, it's a tool for writing better code.

BTW from an operational perspective I've found it's most effective to insist on 100% coverage, but to use annotations to tell the code coverage tool to ignore stuff the team has actively decided not to test - much easier to pick up the uncovered stuff in code review and come to an agreement on whether it's ok to ignore

15
johnwatson11218 3 days ago 2 replies      
Not sure if this is already mentioned but for me the most concise illustration of this fallacy was in The Pragmatic Programmer book. They had a function like this:

double f( double x ) { return 1/ x; }

They pointed out that it is trivial to get 100% coverage in test cases but unless your tests include passing in 0 as the parameter you are going to miss an error case.
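
In test form the point looks something like this (a hypothetical Java port of the book's example; note that with IEEE doubles the failure mode is a silent Infinity rather than a crash):

  public class DivCoverage {
      static double f(double x) { return 1 / x; }

      public static void main(String[] args) {
          // This one call already gives f() 100% line coverage...
          assert f(2.0) == 0.5;   // run with java -ea to enable assertions
          // ...but only a test chosen by thinking about the input domain
          // notices that f(0.0) quietly returns Infinity.
          assert Double.isInfinite(f(0.0));
      }
  }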

16
coding123 4 days ago 2 replies      
I wish people cared more about the craft of an amazing plugin architecture or an advanced integration between a machine learning system and a UI, but no, more and more of our collective development departments care more about TDD and making sure things look perfect. Don't worry about the fact that there are no integration tests and we keep breaking larger systems, and while there might be 100% code coverage, no developer actually understands the overall system.
17
kabdib 4 days ago 0 replies      
I've seen projects where management had rules like "you must have 70% code coverage before you check in". Which is crazy, for a lot of reasons.

But the developer response in a couple cases was to puff the code up with layers of fluff that added levels of abstraction, just passing stuff down to the next layer unchanged, with a bunch of parameter checking at each new level. This had the effect of adding a bunch of code with no chance of failure, artificially increasing the amount of code covered by the tests (which, by the way, were bullshit).

I got to rip all that junk out. It ran faster, was easier to understand and maintain, and I made sure I never, ever worked with the people who wrote that stuff.

18
biztos 3 days ago 1 reply      
A lot of people here seem to have strong opinions against 100% coverage, so I'll risk their ire with my strong opinion in favor.

If you have, say, 95% coverage -- and most corporate dev orgs would be thrilled with that number -- and then you commit some new code (with tests) and are still at 95%, you don't know anything about your new code's coverage until you dig into the coverage report. Because your changes could have had 100% coverage of your new thing but masked a path that was previously tested; or had 10% but exercised some of the previously missing 5%.

If you have 100% coverage and you stay at 100% then you know the coverage of your new code: it's 100%. Among other things this lets you use a fall in coverage as a trigger: to block a merge, to go read a coverage report, whatever you think it warrants.

Also, as has been noted elsewhere, anything other than a 100% goal means somebody decides what's "worth" testing... and then you have either unpredictable behavior (what's obvious to whom?) or a set of policies about it, which can quickly become more onerous than a goal of 100%.

It's important to remember that the 100% goal isn't going to save you from bad tests or bad code. It's possible to cheat on the testing as well, and tests need code review too. There's no magic bullet, you still need people who care about their work.

I realize this might not work everywhere, but what I shoot for is 100% coverage using only the public API, with heavy use of mock classes and objects for anything not directly under test and/or not stable in real life. If we can't exercise the code through the public API's then it usually turns out we either didn't rig up the tests right, or the code itself is poorly designed. Fixing either or both is always a good thing.

I don't always hit the 100% goal, especially with legacy code. But it remains the goal, and I haven't seen any convincing arguments against it yet.

Open the flame bay doors, Hal... :-)

19
devrandomguy 3 days ago 2 replies      
If you can prove that your testing process is perfect, then your entire development process can then be reduced to the following, after the test suite is written:

    cat /dev/random | ./build-inline.sh | ./test-inline.sh | tee ./src/blob.c \
      && git commit -Am "I have no idea how this works, but I am certain that it works perfectly, see you all on Monday!" \
      && git push production master --force
When presented like this, relying on human intelligence and experience doesn't seem like such a bad thing after all.

Just so we're clear, my username was not inspired by this scheme.

20
apo 3 days ago 0 replies      
You don't need to test that. ... The code is obvious. There are no conditionals, no loops, no transformations, nothing. The code is just a little bit of plain old glue code.

Here's the code:

    @Override
    public void initialize(WatchlistDao watchlistDao) {
        watchlistDao.loadAll(watchListRow -> watchlists.add(watchListRow));
    }
Maybe I'm dense, but this code raises at least one question that I would prefer to see answered by tests.

The variable watchlists appears to be defined in a scope above the one under test. What happens if watchlists is null for some reason? What should the behavior be?

Then there's the tricky question of what to do as this method evolves. Next month, a watchListRow might need to be updated with a value before being added to watchlists. Later, a check might be added to ensure some property exists on watchListRow. At what point will a test be written for this method?

21
jowiar 3 days ago 0 replies      
One of the pressures for 100% coverage is working in a non-typesafe language. The gospel of coverage largely evolved in the Ruby community, where I often see test suites that look like a handrolled typechecker.
22
jganetsk 3 days ago 1 reply      
He's right, but he's conflating 100% code coverage with using mocks with writing tests.

Always write tests. And strive for maximum coverage. But make sure you write the right kinds of tests:

- Don't overuse mocks. Mocks don't represent real conditions you would actually see in production. Favor using real dependencies over mocks.

- Don't overspecify your tests. Test only publicly specified parts of the contract. Things that you need to be true and that the callers of the module expect to be true. And yes, you will change the test when the contract changes.

23
WalterBright 3 days ago 1 reply      
Of course any metric can be rendered useless if one "works the metric" rather than the intent of the metric.

But in my experience, code-covering unit tests have correlated strongly with faster development and far fewer bugs being uncovered in the field.

24
lowbloodsugar 3 days ago 1 reply      
I once joined a company that had 90% code coverage. After a while it became clear that these were all vanity tests: I could delete huge swathes of code with zero test failures. We let the contractors that wrote it move on, and we formed a solid team in house. We don't run code coverage any more because it makes the build run four times slower. Instead, I trust our teams to write good tests. Sometimes that means <100% coverage, and the teams are able to justify it.

Some feedback on the article:

>Test-driven development, or as it used to be called: test-first approach

Test-first is not the same as Test-Driven. The test-first approach includes situations where a QA dev writes 20 tests, and then hands them to an engineer who implements them. That's not TDD.

>"But my boss expects me to write test for all classes," he replied.

That's very unlikely to be TDD. "Writing tests because I've been told to" is never likely to be "I'm writing the tests that I know to be necessary", and that's all TDD is: writing necessary tests. If the test isn't necessary, then neither is the code.

>Look, if that imaginary evil/clueless developer comes and breaks that simple code, what do you think he will do if a related unit test breaks? He will just delete it.

Sure. But then their name is on that act in the commit log. The test is a warning. I've been lucky not to have worked with evil developers, but I have worked with some clueless ones, and indeed some have just deleted tests. That's an opportunity for education, and quality has steadily improved.

>The tragedy is that once a "good practice" becomes mainstream we seem to forget how it came to be, what its benefits are, and most importantly, what the cost of using it is.

Totally agree. So many programmers and teams practice cargo cult behaviors. Unfortunately, this article is one of them: making claims about TDD, and unit tests in general, without understanding "why" TDD is effective.

25
djtriptych 4 days ago 0 replies      
My version of this is working on a team with 100% coverage that still saw a steady and heavy influx of bugs. 100% coverage does not mean bug free.

I advocate spending time on identifying/inventing the correct abstractions over coverage.

26
dcw303 4 days ago 0 replies      
I think you should write a test.

Naming the test just "initialise" is not very useful as it doesn't assert what you expect the method under test to do. Given that the purpose of the initialise function is to populate a watchlists collection variable from the parameter, I'd name the test something like "initialise_daoRecordCountIs9_watchlistCountIs9". The pattern I generally use is <method_name>_<assertion_under_test>_<expected_result>.

Then, my test would be the following:

* Set up / mock the dao parameter to have 9 rows

* Create an instance of the class under test and push in the dao parameter

* Verify / Assert that the class under test now has 9 items in the watchlists variable - I'm assuming there is a public method to access that.
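
Roughly, in Python (the code under discussion is Java, so this is only an analogous sketch; the holder class and names are lifted from the thread, not from any real codebase):

    class FakeWatchlistDao:
        """Stand-in dao that 'loads' 9 rows through the callback."""
        def load_all(self, callback):
            for row in range(9):
                callback(row)

    class WatchlistHolder:
        """Hypothetical class under test."""
        def __init__(self):
            self.watchlists = []

        def initialize(self, watchlist_dao):
            watchlist_dao.load_all(lambda row: self.watchlists.append(row))

    def test_initialize_daoRecordCountIs9_watchlistCountIs9():
        holder = WatchlistHolder()
        holder.initialize(FakeWatchlistDao())
        assert len(holder.watchlists) == 9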

27
xutopia 3 days ago 1 reply      
The tragedy he mentions isn't 100% code coverage itself. It's more about people using the wrong tools for the job and using 100% coverage as an indication that everything is fine.

100% coverage for my team means that we were intentional about our code. It's not hard at all to have 100% coverage in a Ruby application as it is possible to do a lot with very little code.

Furthermore it allows us to bring in a junior on the team because we know they have a safety net.

Also for the record we do code reviews and are very thoughtful about the code we write. 100% coverage does not stop the possibility of some bugs inserting themselves somewhere.

28
lacampbell 3 days ago 0 replies      
I feel like this high test coverage thing can only work if you have tight modules, tight interfaces, and you only bother testing at module boundaries. So the test cases almost function as a bit of executable API documentation - here's the method name, here's what it does, here's the contracts and/or static types, and.... given this input, you should get this output.

Do it for the high level bits you actually expose. If you're exposing everything, tests won't really save you - architecture and modularity are more fundamental and should be tackled first. If you're writing a big ball of mud, what benefit do you get testing a mudball?

29
ajmurmann 4 days ago 0 replies      
100% code coverage, even when TDD'd, doesn't and shouldn't mean 100% unit tested. Glue code and declarations don't need a unit test. Some functional tests should provide all the coverage needed to give you confidence to refactor that code in the future.

Edit: while I'm a huge TDD advocate, I'm not a big advocate of measuring code coverage. That should only be necessary if you are trying to get a code base under coverage that wasn't TDD'd. Even then I'd rather add the coverage as I'm touching uncovered code. If it works and I'm not touching it, it doesn't need tests.

30
brlewis 4 days ago 0 replies      
There's a human tendency to overemphasize things you can quantify. So we try to figure out how to test every code path rather than what we should do: try to figure out which inputs we should test against.
31
koonsolo 3 days ago 1 reply      
I use the following list to decide on creating a unit test or not. More yeses means a unit test is a good idea.

1. Is it hard to instantly test the code when implementing it? (Might be the case for library code)

2. Is there a chance the underlying implementation might change (and so might break in the future)?

3. Will the interface of the class remain stable? (If not, the unit test needs to be rewritten too)

4. Will functional tests pass when something breaks in this class?

32
antirez 3 days ago 0 replies      
Code coverage is an illusion, since what you want is actually "possible states coverage". You can cover all the lines of your code and still cover a minority of the possible states of the program, and especially a minority of the most probable states of the program when actual users execute it; or you can cover 50% of your lines of code and yet cover many more real-world states, and states for which it is more likely to find a bug.

I think that more than stressing single features with unit tests, it is more useful to write higher-level stress tests (fuzz tests, basically) that have the effect of testing lines of code as a side effect of exploring many states of the program. Specific unit tests are still useful, but mostly in order to ensure that edge cases and the main normal behavior correspond to the specification. As in everything, it is the developer's sensibility that should drive what tests to write.
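
A crude illustration of the lines-vs-states point in Python (the function under test is hypothetical): a couple of hand-picked cases would already give 100% line coverage, while the loop below explores thousands of states and would catch bugs those cases miss.

    import random

    def insert_sorted(xs, v):
        # hypothetical code under test: insert v into sorted list xs
        for i, x in enumerate(xs):
            if v < x:
                return xs[:i] + [v] + xs[i:]
        return xs + [v]

    # Two hand-picked cases cover every line; this fuzz loop instead
    # checks the invariant across many random program states.
    for _ in range(10000):
        xs = sorted(random.randint(-50, 50) for _ in range(random.randint(0, 20)))
        v = random.randint(-50, 50)
        assert insert_sorted(xs, v) == sorted(xs + [v])
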
33
jondubois 3 days ago 0 replies      
Agreed. I would rather have 5% test coverage that checks against all risky edge cases/inputs than 100% test coverage that checks against arbitrary, low-risk inputs.

Writing tests to confirm the simplest, most predictable use cases is a waste of time - Those cases can be figured out very quickly without automated testing because they are trivial to reproduce manually.

34
Ace17 3 days ago 0 replies      
Having 100% code coverage is like having 0 warnings (although it certainly is a lot harder). In this situation, your tools are not telling you "all's good", but rather "I can't detect anything suspect here".

There's a good chance that the dev time needed to go from 90% coverage to 100% coverage might be better spent somewhere else.

35
DTrejo 4 days ago 0 replies      
Has anyone seen a fuzzer that creates variants based on a test suite with 100% coverage? Hmm... the fuzzer still wouldn't necessarily know how to create the correct invariants. #lazyweb
36
rumcajz 3 days ago 0 replies      
I've seen a project with 100% unit test coverage, yet no e2e tests. Nobody knew whether the product worked at all.
37
yoav_hollander 3 days ago 0 replies      
One point already made by several people on this thread is that code coverage, while helpful, is not enough (and perhaps is not even the best bang for the buck).

In hardware verification (where I come from, and where the cost of bugs is usually higher), "functional coverage" is considered more important. This is usually achieved via constraint-based randomization (somewhat similar in spirit to QuickCheck, already mentioned in this thread).

I tried to cover (ahem) this whole how-to-use-and-improve-coverage topic in the following post: https://blog.foretellix.com/2016/12/23/verification-coverage...

38
zxcmx 4 days ago 1 reply      
I wish more people cared about path coverage as opposed to "line coverage".
39
hultner 3 days ago 1 reply      
Back when I learnt Haskell we had a lecturer named John Hughes who had co-authored a tool named QuickCheck [1]. We used this tool extensively throughout the course; with it, testing was quite simple and writing elegant generators was a breeze. In my experience, these tests did a much greater job of finding edge cases than many unit tests I've seen in larger, close-to-full-coverage TDD projects.
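
For readers outside Haskell: Hypothesis is the usual Python take on the same idea. A minimal sketch (the encode/decode round-trip property here is just an illustrative assumption):

    from hypothesis import given, strategies as st

    def encode(s):
        return s.encode("utf-8")

    def decode(b):
        return b.decode("utf-8")

    # Hypothesis generates the inputs -- including the nasty edge cases
    # a hand-written unit test tends to forget.
    @given(st.text())
    def test_roundtrip(s):
        assert decode(encode(s)) == s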

As with much else, TDD should be a tool with the ultimate goal of aiding us in writing correct and less bug-riddled code; once the tool adds more work than it saves, it's no longer offering much aid.

[1] https://en.wikipedia.org/wiki/QuickCheck

40
waibelp 3 days ago 0 replies      
This reminds me of recent projects where developers started to mock every piece of code... The result was that all tests passed while the codebase exploded in real environments.

In my opinion the best advice is to force developers to use their brains. I know, there are a lot of sh*tty CTO/CEO/HoIT/SomeOther"Important"Position people out there seeing them as code monkeys and saying that developers are not paid to think, but in that case the best thing developers could do is learn to say "NO"... My experience with that kind of people is that they need to learn the meaning of "NO", instead of wasting time and money at the end of the day.

41
elchief 3 days ago 1 reply      
What you actually want to do is test the methods with the highest cyclomatic complexity first (where it's greater than 1)
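
To ground the term: cyclomatic complexity roughly counts the independent paths through a method (branch points plus one). A hypothetical Python example of where such a tool would point you first:

    def classify(n):
        # cyclomatic complexity 3: two branch points + 1
        if n < 0:
            return "negative"
        if n == 0:
            return "zero"
        return "positive"

    # A method like this earns tests before any straight-line glue code does.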

IntelliJ has a plugin

42
keithnz 3 days ago 0 replies      
I've heard Kent Beck talk about having the smallest amount of tests that give you confidence.

Which he also gave as an answer here:

http://stackoverflow.com/questions/153234/how-deep-are-your-...

But I know a lot of people in the early days of XP went to extremes, 100% code coverage, mutation tools for every condition to ensure unit tests broke in expected ways, etc. But they were more experiments in pushing the limits rather than things that gave productivity gains.

43
ioquatix 4 days ago 1 reply      
I have 100% code coverage on a couple of projects. It has two benefits:

Behaviour is completely covered by tests, so changes in APIs which might break consumers of the library will at least be detected.

New work on the library tends to follow the 100% coverage by convention, so it's somewhat easier to maintain. Apps that have 90% coverage, for example, tend to slip and slide around. Having 100% coverage projects the standard "If your contribution doesn't have 100% coverage it won't be accepted". I don't think this is a bad default position.

44
falcolas 3 days ago 0 replies      
IMO, if I ever have 100% code coverage, I did something wrong. The best I can usually achieve is 95-98%, because of my defensive coding to warn about the "impossible" use cases.

Escape a `while True` loop? Log it, along with the current state of the program, and blow up (so we can be restarted). Memory allocation error? Log it. The big "unexpected exception" clause around my main function? Log it.

If I do hit those in testing, my code is wrong.

45
rothron 3 days ago 1 reply      
I don't think I know anyone that does TDD. Uncle Bob has indoctrinated a few zealots into that mindset, but it all comes off as crazy to me. A germ of a good idea taken way too far.

People of that school tend to write tests that test implementation rather than functionality. As a result you get fragile tests that break not telling you what went wrong but how the implementation has changed.

Good tests should test behavior. A change in implementation shouldn't break the test.

46
xmatos 3 days ago 1 reply      
Build tests against your app's public interface. On a web app, that would be your controllers or API.

That will give you good coverage, while avoiding too simple to be useful unit tests.

It's really hard to foresee all possible input variations and business logic validations, but that doesn't mean your test suite is useless.

It just means it will grow every time you find a new bug, and you are guaranteed that one won't happen again...

47
vinceguidry 3 days ago 0 replies      
I noticed the author was speechless in two situations, both of which involved "but we write all our tests in <test-framework>." This is legitimate and should be taken more seriously by the author.

Codebases serve businesses and businesses value legibility over efficacy. It's more important to them to have control over their assets than to have better assets. Using one test framework is in perfect service of that goal.

It's inefficient in that it will take future developers more time to understand that code. But fewer architectural elements means that you can get by with less senior programmers.

Imagine if you went onto a software project and they were using 6 different databases because every time they had a new kind of data that they wanted to access differently, they reached for another database rather than use the one they had.

Of course nobody would ever do that, well I hope anyway, but I do see a lot of unnecessary architectural complication in projects in service of "using the right tool for the job." And it can balloon. A new test framework has to work in your CI framework. You need to decide how to handle data. It's not a huge decision, but it's more complicated than most devs would think and it'll take up more of your time than you'll expect.

You can generalize this to the main thrust of the article. 100% code coverage is not a bad goal to want to hit. Sure, you're going to get a lot of waste. But you're not paying for it, your employer is. And your employer might have a different idea of which side of the tradeoff he wants to be on and where to draw the line. You know the code way better than they will, but they know the economics far better than you ever could.

48
michaelfeathers 3 days ago 1 reply      
Coverage isn't the goal. The goal is understanding.

Write a test if you don't feel confident that a piece of code does what you think it does. If you're not sure what it does now, there's little chance that you or anyone else will in the future, so write a test to understand it and to make that understanding explicit.

Use curiosity as a driver.

49
Dove 3 days ago 0 replies      
Automated tests are code, and come with all the engineering and maintenance concerns of 'real' code. They don't do anything for your customers, though, so are only appropriate when they actually make your work faster or safer.

Automated tests are a spec, and are exactly as hard to write completely and correctly, and as easy to get wrong in ignorance, as a 'real' spec. If you find them easy to write, odds are good you would find the code easy to visually verify as well - which is to say, you're working on a trivial problem.

They have their place, but that place is not everywhere. It is where they are efficient and valuable. I particularly look for places where they are like the P half of an NP problem, an independent estimate of the answer to a math problem. If you ever find yourself writing the same code twice, unless it's a safety-critical system or something, that's a moment to stop and reflect on the value of what you are doing.

50
mirko22 3 days ago 0 replies      
The title does not mean anything and is basically clickbait in my opinion, as it sounds cool to trash some ideal. That said, the magic 100% number is far removed from reality and does not represent anything by itself.

100% coverage on a project of which size? Imagine you have a single-script project that does exactly one thing, and 2 tests are enough to verify that it works without doing it manually. That is not the same as writing tests for a file system, or tests which consist mostly of mocks upon mocks upon mocks.

I think the real problem is someone comes up with an idea, like TDD, tells people about it, some people hear about it and start preaching it, some people start believing it, and nobody actually thinks things through, usually cos they don't have experience (it's not a fetish as someone said). Like everything in life, you have to think things through before doing them, ask yourself whether this is worth doing and when it is worth doing. You can't just say: "Oh we are doing TDD thus everything must be done in the TDD way".

For people that say tests are useless, or good code does not need tests, I ask: when you make a change, do you still make sure your code works by hand? And if you do make sure, why don't you automate that? You are a programmer after all.

And for those that say you need to test everything: well, you don't, especially if you need to mock most of it, or it's really not that important a piece of code, like a dev tool or something. What you want to make sure works is the customer/user-facing stuff that must work for you to get paid, and you want to be able to verify this at any time of day without losing hours clicking around checking for stuff.

So this is not straightforward; 100% means nothing without context, and doing anything in excess and without valid reasons is pointless or even harmful. And this has nothing to do with programming, but life in general.

51
ishtu 3 days ago 0 replies      
>Testing is usually regarded as an important stage of the software development cycle. Testing will never be a substitute for reasoning. Testing may not be used as evidence of correctness for any but the most trivial of programs. Software engineers some times refer to "exhaustive" testing when in fact they mean "exhausting" testing. Tests are almost never exhaustive. Having lots of tests which give the right results may be reassuring but it can never be convincing. Rather than relying on testing we should be relying in reasoning. We should be relying on arguments which can convince the reader using logic. http://www.soc.napier.ac.uk/course-notes/sml/introfp.htm
52
reledi 3 days ago 0 replies      
I've found that some bootcamps are responsible for this attitude as they preach to have 100% coverage. And no one really questions the experienced and heavily opinionated teacher.

It's fine to use hyperbole and paint things black and white when teaching, so the point comes across more easily. But students should at least be made aware of the caveats before they graduate.

53
mjevans 4 days ago 0 replies      
I think I'd rather focus on documenting the information flow. Of having the tools to track down where things start to go wrong when there's a problem and I ask things to run with more verbosity.

Initial "complete coverage" should probably start from mockups that test an entire API. The complete part should be that, in some way, the tests cover expected successes AND failures (successfully return failure) of every part of the API, but there's no need to test things individually if they've already been tested by other test cases.

Invariably reality will come up with more cases and someone will notice an area that wasn't quite fully tested. That's where a bug exists, but the golden test cases probably wouldn't have located it anyway. It'll take thousands or millions of users to hit that combination and notice it. Then you get to add another test case while you're fixing the problem.

54
knodi123 3 days ago 0 replies      
I recently broke a unit test by adding one entry to a hash constant (a list of acceptable mime types and their corresponding file extensions). I looked at the test, and it was just comparing the defined constant to a hardcoded version of itself.

I rewrote the test by converting the constant to a string, taking a checksum of it, and comparing _that_ to a short hardcoded value. Now the test is just 1 line of code, instead of 41! Then I put it through code review, and my team said "What a ridiculous test." But they didn't see any problem in the previous version that compared it to a 40-line hardcoded hash.
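
Something like this, in Python terms (the constant is a made-up stand-in; hashlib and json are stdlib):

    import hashlib
    import json

    # hypothetical stand-in for the real constant
    ACCEPTABLE_MIME_TYPES = {"image/png": ".png", "image/jpeg": ".jpg"}

    def checksum(table):
        # stable serialization first, then a short digest to pin in the test
        blob = json.dumps(table, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()[:12]

    # The whole test then becomes one line against a short pinned value:
    #   assert checksum(ACCEPTABLE_MIME_TYPES) == "<value pinned at review time>"
    print(checksum(ACCEPTABLE_MIME_TYPES))  # run once to get the value to pin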

It's a weird world.

55
tommikaikkonen 3 days ago 0 replies      
Property-based testing has made testing more productive and fun for me. You write a few lines of code that produce a large amount of tests. The idea is obviously so useful, I'm surprised it's uncommon in practice. When you think about coverage in terms of inputs applied instead of statements executed, property-based testing is far more productive than writing tests by hand.

It's not a silver bullet though. Some property-based tests are easy to write but offer little value. Sometimes you spend more time writing code to generate the correct inputs than the value of the test warrants. It has a learning curve. Still, I think it is the most powerful tool you can master for testing.

56
seabornleecn 2 days ago 0 replies      
I think pursuing 100% test coverage is not a fixed end state; it is a necessary process for learning how to write tests.

Think about one question first: why did the manager force developers to achieve 100% coverage? There must be some benefit, or else the manager came from the competitor. Standing at a higher position and considering time and organizational factors, it might be a good choice. If every engineer in the company had as deep an understanding of test coverage as the author, they really would not need to pursue 100% coverage. But in reality, we can see many companies which do not pursue test coverage at all; their coverage tends to be 0. That's why we need to force 100% test coverage for a short time. Engineers need time to form the habit of testing their code, and then to experience the pain of bad tests. Then they start to think about what kinds of tests are valuable.

57
vitro 3 days ago 0 replies      
To paraphrase: "Premature testing is the root of all evil".

How I do it is going from rough testing of pages and components to granular testing of those parts which had some error.

For pages, I just run them to see if they display without producing errors, same goes for critical components. This gets me the feeling of roughly tested and from the user perspective working system with little time investment.

Then I test critical business logic, but usually only after some error was reported.

Mind though that I am freelance developer unconstrained by organizational rules.

58
afpx 3 days ago 0 replies      
Many of us have made similar mistakes (especially early in our careers) when taking on new techniques with which we became particularly enthralled. That's why it's a good idea to have a couple of 'elders' on staff, so as to not allow youthful passion to wreak havoc. They tend to keep teams pragmatic and lazy (a good thing, in programming).

For instance, I remember all the bad code that I wrote and read circa 1997-1999, after design patterns became the rage.

59
chmike 3 days ago 0 replies      
While 100% code coverage doesn't guarantee 0 bugs, it's useful for easily detecting new untested code additions and possible bug additions. Another point is that the code may look obviously right by visual inspection, but we want to automate the check. Relaxing the 100% coverage goal is a lazy, slippery slope I don't take with my code.

The danger of 100% coverage is that the goal of the tests becomes the 100% code coverage, and not bug detection anymore.

60
raverbashing 3 days ago 1 reply      
Most of "we should go for 100% coverage" is simply cargo-culting (pushed by "gurus" like Uncle Bob - the negative aspects of the word guru implied)

Not to mention 100% coverage is no guarantee the system works; in practice, quite the opposite

Not to mention this BDD crap which only makes my blood boil, it's syntactic yuck disguised as syntactic sugar

61
josteink 2 days ago 0 replies      
> The tragedy is that once a "good practice" becomes mainstream we seem to forget how it came to be, what its benefits are, and most importantly, what the cost of using it is.

Totally agree. You can say this about lots of things really and not just tests.

62
Ace17 3 days ago 0 replies      
See the paper "On the Danger of Coverage Directed Test Case Generation": http://link.springer.com/chapter/10.1007%2F978-3-642-28872-2... The idea is that a test suite can have 100% coverage and still be a very bad test suite.
63
kelnos 4 days ago 1 reply      
I find that, as I'm building something from scratch, the vast majority of the errors I make are just things I didn't think of. Tests don't help there because I can't test on input that I don't even imagine happening. So I generally write few tests, because, to be honest, most code is trivial and algorithm-light. Sure, if I have to write a parser or something a bit more fiddly, I'll write a unit test to be sure that it's doing what I expect, but that tends to be the exception, not the rule. I do write my code with an eye toward later testability if it turns out to be necessary, but I find that to be fairly easy, and also a good measure of whether I'm doing the right thing: most code that isn't testable is probably code that's difficult to read and maintain, anyway, so if I look at something and think "oof, how would I ever write a test for that?" I'll usually delete it and start over.

When I have something that should be working, I test it in a more functional/integrative manner, and move on.

Later, I'll write unit tests when I need to. If I want to refactor something, or drastically change the implementation of something, I'll write out some tests beforehand to be sure that the pre and post behaviors match.

I've always thought that TDD is just premature optimization. You're optimizing for the idea that you -- or someone -- will later need to make large enough changes to your code that you'd worry about breaking it. In my experience that's fairly rare, and you spend less time overall if you just write the tests as you need them, not up-front. Yes, writing a test when the code is fresh in your mind will be faster than writing it much later, but then you're writing a ton of test code that likely won't be necessary.

An objection I hear to this is that you're not just writing tests for yourself, you're writing tests for the others who will need to help maintain your code, perhaps after you're gone. I'm somewhat sympathetic to this, but I would also say that if someone else needs to modify my code, they damn well better first understand it well enough such that they could write tests before changing it (if they deem it necessary). Anything else is just irresponsible.

(Note that I primarily work in strongly statically typed languages. If I were writing anything of complexity in ruby/python/JS/etc., I don't think I'd feel comfortable without testing a lot of things I'd consider trivial in other languages.)

(Also note that some things are just different: if you're writing a crypto library, then you absolutely need to write tests to verify behaviors, in part because you're building something that must conform to a formal spec, or else it's less than worthless.)

64
reledi 3 days ago 0 replies      
Striving for 100% coverage is an expensive mistake because as a testing indicator it gives you a false sense of security. But someone has to pay for the time spent writing and maintaining those tests, and fixing the bugs that are still there.

I much prefer to use code coverage as a weak indicator for finding dead code.

65
divan 3 days ago 0 replies      
Good example of Goodhart's law: "When a measure becomes a target, it ceases to be a good measure." https://en.wikipedia.org/wiki/Goodhart%27s_law
66
BJanecke 3 days ago 0 replies      
So generally, I write a test when I want to make an assumption a certainty. If I can't be certain that something is doing what it's supposed to, I write a test for it. Make sense?

So

    Int => add(x, y) => x + y;

Doesn't get a test, however

    Int => formulateIt(x, y) => (x * y)^y

Does

67
hasenj 3 days ago 0 replies      
I think unit testing makes sense when you have a function doing some math that can't be easily verified to be sensible by merely glancing at the code for two minutes.

I'm not sure there's much use for it in other scenarios.

68
crimsonalucard 3 days ago 0 replies      
It's a case of convention over common sense.
69
shusson 3 days ago 1 reply      
Has anyone read any papers about the relationship between code coverage and defects?
70
EugeneOZ 3 days ago 1 reply      
Laziness is a kind of populism. Such articles will always be upvoted.
71
EngineerBetter 3 days ago 0 replies      
I'd suggest the tragedy here is an absence of kaizen and team processes that foster continuous improvement. If folks are doing inefficient things, that should be caught by the team in a retro or similar.
16
Keylogger in Hewlett-Packard Audio Driver modzero.ch
485 points by ge0rg  1 day ago   113 comments top 17
1
userbinator 1 day ago 5 replies      
Actually, the purpose of the software is to recognize whether a special key has been pressed or released.

I'm doubtful of the utility of software like this. Every driver and application seems to want to keep a persistent background process running, and because of the natural inefficiency of software (this executable is ~2MB --- why it needs to be this big, I'm not certain; from a brief inspection, all it seems to be doing is controlling microphone mute/unmute), this results in a huge waste of resources and new computers which appear no more responsive than older ones.

However, to put the severity of this problem in perspective, from the description this is not like a typical keylogger that sends keystrokes out to some remote server; it only logs locally.

If you regularly make incremental backups of your hard-drive - whether in the cloud or on an external hard-drive - a history of all keystrokes of the last few years could probably be found in your backups.

There's going to be plenty of other sensitive information in your backups, which if you don't want others to read you would use encryption anyway, in which case the point is rather moot.

Any process that is running in the current user-session and therefore able to monitor debug messages, can capture keystrokes made by the user.

...or it could just monitor the keystrokes itself with SetWindowsHookEx() like this process.

Thus, I think the correct reaction to this is more towards the "oops... that wasn't a good idea" than "everybody panic!"

2
amluto 1 day ago 1 reply      
One thing I really like about Linux: random platform-specific hardware features like the mic button or whatever this is are handled by an open source "platform" driver in the kernel. These drivers expose a more or less uniform interface to user code.

So, when I install Linux on a laptop, most or all of the weird laptop-specific buttons just work without OEM crapware or runtime performance hits.

The downside, of course, is that you can't just download fresh crapware to make your brand new laptop fully functional. I'll take that tradeoff.

3
xroche 1 day ago 6 replies      
As a rule of thumb, you have:

    * Decent software companies terrible at making hardware
    * Decent hardware companies terrible at making software
I have yet to see one that does both correctly. Hardware manufacturers are known to produce the worst code quality you can think of: badly designed, poorly written, undocumented, insecure, bloated.

I have the feeling that the whole IoT problem is also related.

4
drinchev 1 day ago 2 replies      
> Actually, the purpose of the software is to recognize whether a special key has been pressed or released. Instead, however, the developer has introduced a number of diagnostic and debugging features to ensure that all keystrokes are either broadcasted through a debugging interface or written to a log file in a public directory on the hard-drive.

Looks like it's not intentional. Although it points to a really poor code-quality process, I would say.

5
zollidia 1 day ago 1 reply      
I'm strangely not surprised by HP and their actions (in this case, a lack thereof). It reminds me of the Bose issue a year or so back with their products.

And the impact HP is going to experience is nothing. Most people to this day really don't care about/understand why this is a problem. They just want to get a computer for school, general internet surfing, or watching cat videos. (Cat and dog videos are quite interesting.)

6
arca_vorago 1 day ago 0 replies      
I remember in the late 90's early 2000's when HP was embracing linux and open source... and then they merged with Compaq and I've seen nothing but mistake after mistake from them since.

I'm really tired of seeing companies positioned to make good things and better the world get focused on quarterly profits and short-term thinking, because it always bites them in the ass eventually.

Mismanagement from the C level up abounds.

7
doreox 1 day ago 0 replies      
> ...or it could just monitor the keystrokes itself with SetWindowsHookEx() like this process.

...which any AV will immediately flag. This allows malware to keylog in a much less detectable way by piggybacking off trusted HP software

8
snowpanda 1 day ago 0 replies      
I archived the HP page just in case: https://archive.fo/FjWUv
9
CodeSheikh 1 day ago 0 replies      
Is this an old article? Conexant was acquired by Philips a while back.
10
stanislavb 1 day ago 2 replies      
Wow. That's going to hit HP
11
vfclists 1 day ago 1 reply      
This is one of the main reasons for libre/free/open/choose_your_term software.

Even when malice is not to be checked for, genuine error, incompetence, forgetfulness or plain indifference must be checked for.

12
donpdonp 1 day ago 0 replies      
MicBleed
13
secfirstmd 1 day ago 0 replies      
"Neither HP Inc. nor Conexant Systems Inc. have responded to any contact requests. Only HP Enterprise (HPE) refused any responsibility, and sought contacts at HP Inc. through internal channels."

A keylogger and this is their response?

I hope they get the shit sued out of them.

14
donpark 1 day ago 1 reply      
googling "conexant keylogger" shows this is not a new problem.
15
0xFFC 1 day ago 0 replies      
This is a fucked up world we live in!
16
nailer 1 day ago 1 reply      
To fix the super-wide article:

 document.querySelector('.blogbody').setAttribute("style", "max-width:650px; margin: 0px auto;");

17
wereHamster 1 day ago 2 replies      
Please, use a max-width on text columns. The article is unreadable on a large screen.
17
Self-Compassion Works Better Than Self-Esteem theatlantic.com
409 points by jansho  1 day ago   141 comments top 21
1
octygen 1 day ago 5 replies      
As my psychotherapist put it yesterday: people that seek self-esteem in an unhealthy way are vampires. "You have to become your own bloodsource" she said.

When you seek self-esteem in an unhealthy way, you do things to get approval/validation from others. You'll suck some blood from girls who like you, suck some blood from jobs you apply for that want you, tell friends about all the high-end interviews you have and the cool things you're doing. But after you've gotten what you need from the girls, the jobs, the friends, you realize you never really wanted any of them. And that you wasted your time in the process when you should seek what YOUR OWN PATH is. Self-compassion/kindness is when you become your own sustainable bloodsource of self-esteem and it is critical for your survival and success.

2
nothis 1 day ago 9 replies      
There was an article about procrastination that really stuck with me and it argued something similar: one of the reasons we put off work is because we don't look at our "future selves" as the same person, or even as someone we feel sorry for. Basically, we think "I don't give a damn, let my future self deal with the consequences!" By thinking of your future self compassionately, you can much better motivate your current self to do work you'll depend on having finished at a later date.

This might sound weird but I try this sometimes. "Thanks, past self, for having done this on time, now I can enjoy the weekend because I'm done with this tedious work!" You don't have to literally talk to yourself or anything, but it's IMO a healthy state of mind. It's also the only way I've found to directly tackle the underlying problem of procrastination instead of just telling yourself to "not be lazy, stupid!".

3
samirillian 1 day ago 1 reply      
I fundamentally agree with the basic insight of this essay, but I'd like to add that we should strive to be those friends to each other so we don't need to "auto-sympathize" to such a degree. If the people that you value don't value you for how hot your girlfriend is, or how much money you make, then you won't feel the compulsion to build yourself up that way--to such a degree. Of course, the pervading culture still makes it difficult to ignore those sirens.

And of course, social media plays a huge role in this reduction of the complexity of emotional life to more superficial things, to what can fit in a camera lens or a blurb.

The kind of self-esteem that people have, I imagine, must be like malnourished populations that are also obese. They don't have any lack of social interaction quantitatively, but they're still emotionally/spiritually hungry.

4
11thEarlOfMar 1 day ago 3 replies      
Whether it's self compassion, self confidence, self esteem or some combination of them, what needs to be supported is a sense of agency. That sense that if one really needs to make a change or pursue an opportunity, that they have the wherewithal to do so.

Wherewithal might be 'on my own' or with assistance or guidance from others. It might require creativity or just persistence. But without it, we feel helpless, hopeless and ultimately dependent and depressed.

How can people who lack a sense of agency develop it in a healthy way?

5
tabeth 1 day ago 3 replies      
Friendship and community work better than both. I was actually in the middle of finding evidence to post, but it's so numerous it really isn't necessary. Form good unconditional friendships and your happiness will reach new heights.

How do you form good unconditional friendships you ask? Hey, I didn't say I had all of the answers. One thing I might add is that, at least in my experience, a bad friendship is actually worse than being alone (though one might hesitate to call it a friendship to begin with), so tread carefully.

6
SirensOfTitan 1 day ago 1 reply      
Albert Ellis (founder of Rational Emotive Behavioral Therapy) railed against the concept of self-esteem most of his career. He remarked that you should rate your actions, not yourself as a person. This seems even more significant when you realize how important failure is to mastery:

> Striving, by its nature, often results in setbacks, and setbacks are often what provide the essential information needed to adjust strategies to achieve mastery.(from the book on learning Make It Stick)

The self serves as a useful model for behavior, but it changes so much based on context. A universal self-esteem makes no sense when you accept the fact that a human being cannot ever exist in isolation (I always exist in relation to my environment).

7
temp246810 1 day ago 8 replies      
Heh. My problem is that I am so damaged that when I become friends or find girls that care for me unconditionally, I devalue that relationship because it feels un-earned.

Something that comes un-earned to me has no value.

This extends to my relationship with myself. I'm hard on others so it only makes sense that I'm hard on myself.

This is what happens when your parents get divorced and you're raised by a shitty step mom. Not that my own mother was that great to begin with (cheated on my dad etc.)

8
aaimnr 1 day ago 0 replies      
What always blows me away is when people who've been watching their minds most of their lives and lived through the most amazing insights we can imagine - witnessing directly how the mind creates the world, how self is an illusion etc. (what neuroscientists can only state from an intellectual perspective) - tend to say: compassion is the highest form of wisdom.

Kristin Neff, the author of the research in TFA, brought these terms directly from her Buddhist experiences AFAIK. Karuna and Metta (compassion and kindness) are very important mind "algorithms" in the Buddhist framework that, apart from social and behavioral effects, also have quite a significant cognitive function. They allow you to see things so much more clearly, once you understand how complex and interdependent reality is and how little control over it we all have.

It's especially telling when you look at the natural progression of these mind states that one is encouraged to develop: Compassion -> Kindness -> Sympathetic Joy (appreciating the wellbeing of others) -> Equanimity.

9
meesterdude 1 day ago 5 replies      
Reading this stuff is always eye-opening for me. But actually making it a part of your thinking and perspective is a whole other ball of wax. Often, I end up just forgetting entirely as the days pass - not the content, because I can often recite that upon demand, but the actual adoption of it into one's own mindset.

Very often, I've found myself going "oh yeah, I'm trying to do that!", along with several other things as well. This is largely what drove me to build my project to adopt such changes in perspective (http://willyoudidyou.com).

And in addition to yourself, it's good to show compassion to others. But it's definitely easier to do this for others once you have done it for yourself.

10
Tonester 1 day ago 4 replies      
I think it's a question of balance, depending on how strong a position you are.

If you are in a position of weakness, feeling sorry for yourself, recovering from something you are judging yourself on - then you absolutely should start with self-compassion, being kind to yourself.

To continue in this mode would ultimately hit your ambition and drive, so if you are in a bit of a stronger position mentally, a higher gear, then boosting your self-esteem is more important.

11
aaimnr 19 hours ago 0 replies      
It's interesting that both building high self-esteem and negative self-talk (2 directions, same dimension) seem to be the basic activity of the Default Mode Network - the mode of the brain activated during mindless mind-wandering. Usually people spend most of their lives in this state. The amazing part is that there's a strong positive correlation between the DMN activation level and how miserable we feel. Judson Brewer has done a lot of great research on this. Most meditation techniques go directly against the DMN and sometimes result in turning it off for good (look up e.g. Gary Weber - https://youtu.be/QeNmydIk8Yo).
12
gmarx 1 day ago 1 reply      
Most boys think of themselves as attractive and this is stable?

Is this statement true for most of you?

I was a fat kid (and fat young adult) and always thought I was unattractive. Even though it may not have affected me as much as it would had I been a girl, it remains prominent in my self image

13
KhanMahGretsch 1 day ago 0 replies      
I find the terminology a little muddy in this article, as I would not characterise the "narcissistic" behaviours described therein, and the expected rewards, as having anything to do with self-esteem. In my book, self-esteem is inherently intrinsic; appreciating the unconditional value that you, those around you, and every living creature on Earth possesses.

This talk given by Irish comedian Blindboy Boatclub (of Rubberbandits fame) describes the concept as simply, succinctly, and beautifully as I have ever heard.

https://www.youtube.com/watch?v=Zz82P0WqUh4

14
blablabla123 1 day ago 0 replies      
There seem to be various definitions around. Even though a Psychology professor is cited, already at the beginning self-esteem and self-confidence are treated similarly. I really like the following definition of self-esteem: the feeling of (a) being in control of your basic needs and (b) deserving to feel good. This also solves the paradox stated, it just means that you should maintain a certain self-worth.

If your self-worth is too low, you may be used by other people. If it's over the top, you may act arrogant. So yeah...

Self-confidence is more like the

15
kelvin0 1 day ago 1 reply      
Well, this article brings a very insightful perspective into view. It's not often you come across content that might profoundly affect your world views!

Also, at the opposite end of the spectrum, here is a reference to a show called 'Black Mirror', with an episode about the need to constantly seek other people's approval (for self-esteem boosting):

http://www.imdb.com/title/tt5497778/

I highly recommend watching this, very sobering.

16
michaelborromeo 1 day ago 0 replies      
There's an awesome book by Kamal Ravikant that relates to the idea of self-compassion: https://www.amazon.com/Love-Yourself-Like-Your-Depends-ebook...
17
Tharkun 1 day ago 3 replies      
> try talking to yourself like you would your best friend.

So swearing at myself, basically? Strange advice.

18
euske 1 day ago 0 replies      
This is why there's York and Zach in Deadly Premonition!
19
ouid 1 day ago 2 replies      
>Theres nothing wrong with being confident.

This is obviously false, counterexamples are everywhere you look.

20
thunder-ltu 1 day ago 1 reply      
21
menacingly 1 day ago 1 reply      
In which a new set of overconfident, simplistic solutions harshly criticize the previous set of overconfident, simplistic solutions. If only we had known what we were doing when we unleashed the dark force of self esteem we could have stopped bullying!

I'm sure being patient and understanding with yourself (and with everyone else) is an approach to life that has a lot of benefits, but these articles are nothing more than pornography for our narcissistic idea that we have this huge power to shape who our kids are by simply embracing a new outlook from a paperback or using new words.

18
Drone Uses AI and 11,500 Crashes to Learn How to Fly ieee.org
442 points by kasbah  1 day ago   118 comments top 31
1
cr0sh 1 day ago 2 replies      
This is interesting. It really shows how deep learning has become almost "Lego-like".

1. Imagine the problem, add a camera or two (or more).

2. Build/use a pre-trained ImageNet model as a starting point (probably using TensorFlow/Keras).

3. Build a dataset, split it into test, train, validation sets.

4. Train the model further.

5. Test and validate the model. Lower the error rate (don't overfit though!).

6. Profit?

As far as what language to use, depending on the speed of whatever you're trying to do, Python would likely work fine in the majority of cases. If you need more than that, C/C++ is around the corner.

Oh - and OpenCV or some other vision library will probably be used (but just to grab the images, maybe a little pre-processing).

You wouldn't have to use this exact pipeline (you could substitute other deep learning libs, other vision libs, other languages, etc) - but the basics are to start with a well-known CNN model, preferably "pre-trained", then apply your own dataset(s) to the task to get it to work better. Not much more tweaking needed, the biggest thing is to get (or be able to synthesize from what you do have) enough data to throw at it (and have a fast enough system to train it in reasonable time).
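
As a rough sketch of steps 2 and 4 in Keras (the model choice, input shape, and the commented-out datasets are placeholder assumptions, not anything from the article):

    import tensorflow as tf

    # Step 2: start from a pre-trained ImageNet backbone, frozen
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False

    # New task-specific head (e.g. a binary decision per image)
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # Step 4: train further on your own train/validation split
    # model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed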

We've seen this approach many, many times; it seems to work well for a ton of domains and problems. Again - very "Lego-like"...

2
flaviuspopan 1 day ago 2 replies      
"...success is simply a continual failure to crash"

My programming methodology has been validated at last!

3
soared 1 day ago 2 replies      
Similar to the rc rally car that learns to drift by driving and crashing itself [1]. I'm collecting articles/videos where you can see machines teaching themselves things like flying/driving/video games [2].

[1] http://spectrum.ieee.org/cars-that-think/transportation/self...

[2] https://www.reddit.com/r/WatchMachinesLearn/

4
stupidcar 1 day ago 1 reply      
I know people have trained AIs to do more sophisticated things than this, but something about watching from the drone's perspective as it scans its environment and moves around really makes you feel like you're watching a real intelligence at work.
5
Clanan 1 day ago 1 reply      
This is how Uber is going to train its self-driving cars, isn't it?
6
rep_movsd 1 day ago 2 replies      
Would it not be cheaper and faster to simulate a drone and fly it through virtual 3d environments and still learn?

Or would the physics be too complex to model well for simulation?

7
gene-h 1 day ago 1 reply      
I wonder how well the learned policy generalizes to other environments. Places like an art gallery, outside, or a cave. Could the network have learned something fundamental about monocular vision?

It would also be interesting to see if the learned policy corrects for perturbations. If we tilt the drone by hitting it, will the policy stabilize it again?

While this is a really cool result, I suspect that this approach might not be the best way to control UAVs. Dragonflies are ready to fly, avoid obstacles, perch on stuff, and hunt down prey right after warming up their wings for the first time. This implies that a good amount of the flight behavior is 'hard-coded.'

Although I really can't wait until someone expands upon this approach. So instead of outputting left or right, the network could output 'stick vectors,' which translate to control stick commands. Maybe even have the network take in some sensor data and a 'move in this direction' vector. Add in a pinch of sufficiently fast video processing and we could probably learn how to fly through an FPV course or do aggressive maneuvers to fly through someone's windows[0]

[0]https://www.youtube.com/watch?v=MvRTALJp8DM

8
rcarmo 1 day ago 0 replies      
This reminded me of the Douglas Adams books, in which Arthur Dent eventually learns how to fly by "throwing himself at the ground and missing".

Also, the flight had an almost organic quality to it somehow. Spooky, but cool.

9
vanjoe 1 day ago 3 replies      
What determines autonomous flying? Couldn't the drone just hover in the middle of the room and not crash? Would that count? Don't you need some sort of increased score for moving around?
10
peter303 1 day ago 0 replies      
Imagine if Google taught their self-driving cars that way: 10,000 crashes. I think self-driving was completely procedural back when it started 15 years ago. But now with faster and better understood neural nets, some parts like recognizing objects have been replaced by deep learning.
11
fredley 1 day ago 0 replies      
> There is an art to flying, or rather a knack. The knack lies in learning how to throw yourself at the ground and miss. ... Clearly, it is this second part, the missing, that presents the difficulties.
12
udkl 1 day ago 1 reply      
Why would they not use a general purpose classifier [1] instead ?

Sure, the tagging of objects in the field of view in this model may be unnecessary, but you leverage an existing model that should allow the drone to 'think' beyond the current limited "obstruction here". It could at least have been used as a base model to build upon.

[1] https://www.youtube.com/watch?v=_wXHR-lad-Q

Personally, I'm also looking forward to neural networks modeled after real brains [2] .... but the tech to accurately scan the complex interconnections in larger brains seems far away.

[2] http://www.smithsonianmag.com/smart-news/weve-put-worms-mind...

13
folli 18 hours ago 0 replies      
Cool! Do I understand correctly, that the splitting into the part where the drone was doing fine and the part where the drone is crashing (i.e. the annotation of the dataset) was still done manually?

A similar approach using unsupervised learning would be even cooler...

14
bluetwo 1 day ago 2 replies      
Does it really predict 'weather' to move forward or not?

I think you mean 'whether'.

15
jayeshsalvi 14 hours ago 1 reply      
Drone Uses AI and 11,500 casualties to learn How to kill terrorists ... coming soon /s
16
lenkite 1 day ago 0 replies      
It's a pity that it was programmed to split its decision only between left and right images. It could have avoided the chairs by flying higher if there were top and bottom images too. Ideally, the number of decision-point images should be the area of the FOV divided by the drone's forward surface area.
17
otto_ortega 1 day ago 2 replies      
Pretty cool.

Why did they use an input that does NOT provide any information about depth/distance from objects?

18
lottin 1 day ago 1 reply      
Somebody should do a cost-benefit analysis of all this machine learning business. For instance, how much did this project cost and what did they get in return? I'm not suggesting it's not worth it, just curious to know how the numbers turn out.
19
bluetwo 1 day ago 1 reply      
That's silly. It took me no more than 1,000 crashes to learn to fly my drone.
20
artursapek 1 day ago 1 reply      
Like watching a baby learn to walk! We're truly in an exciting age for technology.
21
rodionos 1 day ago 0 replies      
A coding drone uses AI and 11,500 bug reports to learn how to code. Are we there yet?
22
sumoboy 1 day ago 0 replies      
How these learn-to-fly scenarios compare with how Tesla would build an autonomous drone is my question for something viable.
23
return0 1 day ago 2 replies      
Can't they use a simulation to do the training and then fine-tune it using the poor drone?
24
monksy 1 day ago 0 replies      
In other words, they're just learning not to crash/failing to crash.
25
sc0tfree 1 day ago 0 replies      
Oh, great. Next step: weaponization and AI target analysis.
26
kevin2r 1 day ago 0 replies      
What happens if, after training, we move the drone to another location? Will those learned "abilities" be reused, making it easier to fly at the new location?
27
contingencies 1 day ago 0 replies      
This really is the beginnings of SkyNet.
28
sheeshkebab 1 day ago 2 replies      
Pretty close to 10000 hours to get good at something?
29
tormeh 1 day ago 0 replies      
I think this is how babies learn to walk.
30
Para2016 1 day ago 2 replies      
Pretty cool!

I was curious: the article mentions difficulties navigating through glass environments. Could they combine visual information with sonar to avoid crashing into glass and other transparent barriers?

31
albanium 1 day ago 0 replies      
I don't understand why it didn't learn to fly using a flight simulator.
19
President Trump Dismisses FBI Director Comey washingtonpost.com
593 points by DamnInteresting  3 days ago   297 comments top 34
1
Animats 3 days ago 5 replies      
The FBI director is supposed to have a 10-year term. That went in after J. Edgar Hoover died. Nobody wanted another J. Edgar Hoover "FBI Director for Life" situation, but having the FBI director be a "pleasure of the President" appointment made it too political.

This makes Andrew G. McCabe acting FBI director. He's in the civil service, not a Presidential appointment. He was an FBI agent and worked his way up. From what little is available about him, he seems to be good at the job.[1] As civil service, he can only be fired for cause.

Appointing a new FBI director requires Congressional approval, and will be controversial.

[1] http://www.latimes.com/nation/na-la-fbi-deputy-director-2016...

2
fooey 3 days ago 5 replies      
These seem to be confirmed as real, so here are the letters being floated, from Trump, Sessions, and Rosenstein, firing Comey and blaming the Clinton investigation:

Trump: https://pbs.twimg.com/media/C_apTsDXoAAVKYn.jpg

AG Sessions: https://pbs.twimg.com/media/C_apUYrXgAAihp2.jpg

Deputy AG Rosenstein: https://pbs.twimg.com/media/C_apVImXcAIKhfm.jpg

 Dear Director Comey: I have received the attached letters from the Attorney General and Deputy Attorney General of the United States recommending your dismissal as the Director of the Federal Bureau of Investigation. I have accepted their recommendation and you are hereby terminated and removed from office, effective immediately. While I greatly appreciate you informing me, on three separate occasions, that I am not under investigation, I nevertheless concur with the judgment of the Department of Justice that you are not able to effectively lead the Bureau. It is essential that we find new leadership for the FBI that restores public trust and confidence in its vital law enforcement mission. I wish you the best of luck in your future endeavors.

3
TheBiv 3 days ago 7 replies      
"Trump just fired the man leading a counterintelligence investigation into his campaign, on the same day that the Senate Intelligence commitee requested financial documents relating to Trump's business dealings from the treasury department that handles money laundering." -Comment from reddit that sums up how strange this is.
4
rwnspace 3 days ago 12 replies      
381 points, 171 comments, 1 hour ago; as of writing.

Why is this on the second page of HN, and not pole position? I assume/hope that there is some mechanism that stops new content from dominating other content too rapidly.

5
avs733 3 days ago 1 reply      
Sally Yates investigates Trump's cabinet: Fired by the Trump administration

Preet Bharara investigates Trump's cabinet: Fired by the Trump administration

Director Comey Investigates Trump's cabinet: Fired by the Trump administration

6
abalashov 3 days ago 2 replies      
I'm pretty sure the customary reply from the MAGA camp will be that these are all political appointees, and serve at the President's pleasure.

All that is formally true. But it doesn't make it any less uncanny that such a person would be fired at the very moment he ramps up an investigation into Trump's business activities.

7
jjordan 3 days ago 7 replies      
Say what you want about the politics, but it's inarguable that Comey, whether he wanted to or not, had become a partisan lightning rod for both sides. The unbiased credibility of the FBI was at stake with Comey at the helm, and this is probably a good move for the country.
8
davesque 3 days ago 0 replies      
Mods, please let this one live. This is big news and we can't ignore it. I don't care what the policies are about political stories. I also don't care if I can go somewhere else to read about it. I want to know what _this_ community's opinions are on the matter.
9
Matt3o12_ 3 days ago 4 replies      
Well, I would certainly be interested in the circumstances, especially considering that I always believed he was pro-Trump. Some even said he played an important role in Trump winning the election, because he opened an investigation into Clinton's emails right before the election.
10
curiousgal 3 days ago 2 replies      
Flashbacks to Nixon's downfall.
11
colemannugent 3 days ago 2 replies      
A friendly reminder to both sides that whatever the current administration does, the next can undo.

This is especially important when the majority party decides to give itself more power and inadvertently gives their successors more than they intended.

12
rrggrr 3 days ago 1 reply      
Comey's book deal is going to be enormous. His great, great, great grandchildren will be buying Maseratis with the proceeds. He just needs to withstand another six months of testifying on the Hill in front of at least two standing committees and probably a special committee.
13
grizzles 3 days ago 1 reply      
He was too much of a wildcard. Trump wants to wrap up the Russia thing and he needs someone who is more subservient to do that.
14
favorited 3 days ago 1 reply      
If I'm not mistaken, he's the first FBI director to be fired.

Edit: I was, in fact, mistaken.

15
satysin 3 days ago 1 reply      
This is going to make an amazing movie in a decade or two.
16
fencepost 3 days ago 0 replies      
Trump just wanted to be sure that Comey's statement last year about the iPhone hack cost was true.

"more than I will make in the remainder of this job, [...]"

17
iamjeff 3 days ago 0 replies      
President Trump cares little about protecting the Office of the President... his administration has a well-documented history of putting its thumb on the scale regarding the investigation of collusion between his campaign and Russian agents/agencies... this is damaging the credibility of the office... this firing was also clearly decided on first, and the rationale was secured afterward... it baffles the mind that Trump rationalizes this executive action by claiming that Comey was "mean to Clinton" when only a few days ago Comey had his trust... the reasoning cited, and the involvement of Sessions in interfering with an investigation he recused himself from, is bogus... It is not unreasonable to claim that a cover-up is in full swing!
18
tannhauser23 3 days ago 1 reply      
Everyone should read the letter that the Deputy Attorney General wrote to the Attorney General in recommending that Comey be fired. It's brutal: http://apps.washingtonpost.com/g/documents/politics/fbi-dire...

This and Comey's recent misstatements to Congress about Huma Abedin forwarding sensitive emails to Anthony Weiner are alone grounds for Trump to fire Comey. Whether Trump had other motives... I mean, who knows? It's all speculation.

19
jacquesm 3 days ago 7 replies      
Someone better than me in English, please explain the meaning of the word 'recuse'?
20
hota_mazi 3 days ago 2 replies      
Trump is soon going to run out of people to fire.
21
thrillgore 3 days ago 0 replies      
At this point we should demand an immediate Impeachment.
22
wonder_bread 3 days ago 0 replies      
Which can only mean something else happened today that Trump's covering up in the headlines by firing Comey
23
Hermitian 3 days ago 0 replies      
Why isn't this on the front page?
24
Beltiras 3 days ago 0 replies      
Oh, this has got to burn. The man that gave him the office......
25
AnimalMuppet 3 days ago 0 replies      
Mr. Comey just acquired a badge of honor. No, I'm not being sarcastic. It's getting to the point where being fired is more honorable than remaining.
26
newsat13 3 days ago 6 replies      
Can someone clarify if Comey is pro-Trump or not?
27
danielvf 3 days ago 2 replies      
Anyone have a link to the contents of the memo that recommended firing, and contained the reasons for that recommendation?
28
romeisburning 3 days ago 0 replies      
The thought of Trump nominating an FBI director is bone-chilling. Summed up with what's known about Flynn and every other suspicious data point we have, I am increasingly sure this is a modern-day coup of the USA.

Time to pause tech and effect change, this is leading to a future darker than I can possibly contemplate.

29
wtf_is_up 3 days ago 0 replies      
It's about time. Comey has politicized the FBI in ways that have damaged its reputation for years to come.
30
whistlerbrk 3 days ago 0 replies      
It's time for this dictator to be impeached. People need to start marching on Washington.
31
bingomad123 3 days ago 0 replies      
Why are we discussing politics on HN ?
32
hsnewman 3 days ago 0 replies      
Christie will be appointed FBI director, and Comey will get a nice job in the Trump organization for falling on the sword.
33
mtgx 3 days ago 6 replies      
Hopefully there's a silver lining and that this means the encryption backdoor push (led by Comey) will slow to a crawl or be forgotten. He was already preparing a push for FISA Amendments renewal together with Dianne Feinstein (who is apparently having a change of heart about her own retirement).
34
Shivetya 3 days ago 3 replies      
Trump had to dismiss Comey. Comey damaged the FBI in his recent sessions with Congress to the point that the FBI was on the defensive trying to set the record straight. Considering the erratic behavior with both the Clinton and Russia issues, it is doubtful that Comey was capable of continuing in such an office.

Like or dislike Trump, there have been many on the Democratic Party side calling for Comey to be gone, and the odd part is many are now rushing to the guy's defense. That, and he was fired over incorrect testimony about a Clinton aide, testimony that painted her in a worse position than deserved.

Irrational is the best way to describe the reaction of many. I was really shocked by some in the press; it is near impossible to separate journalists from opinion editors when they cannot separate the roles themselves.

20
SQL Notebook sqlnotebook.com
448 points by mmsimanga  3 days ago   102 comments top 20
1
electroly 3 days ago 10 replies      
Hello everyone! Author here. I didn't expect anyone to find this repo, much less post it on Hacker News!

This project is inactive for two main reasons:

- SQLite is not a great general-purpose SQL engine. Poor performance of joins is a serious problem that I couldn't solve. The virtual table support is good but not quite good enough; not enough parts of the query are pushed down into the virtual table interface to permit efficient querying of remote tables. Many "ALTER" features are not implemented in SQLite (see the workaround sketch after this list), which is a tough sell for experimental data manipulation.

- T-SQL, the procedural language I chose to implement atop SQLite, is not a great general-purpose programming language. Using C# in LINQpad is a more pleasant experience for experimentally messing around with data. R Studio is a good option if you need statistical functions.
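
To make the ALTER point concrete: SQLite's ALTER TABLE supports only RENAME TO and ADD COLUMN, so something as simple as dropping a column means rebuilding the table. A minimal sketch with Python's sqlite3 (table and column names are made up for illustration):

  import sqlite3

  con = sqlite3.connect(":memory:")
  con.execute("CREATE TABLE t (a INTEGER, b TEXT)")
  # No "ALTER TABLE t DROP COLUMN b" here: create, copy, swap instead.
  con.execute("CREATE TABLE t_new (a INTEGER)")
  con.execute("INSERT INTO t_new (a) SELECT a FROM t")
  con.execute("DROP TABLE t")
  con.execute("ALTER TABLE t_new RENAME TO t")
  con.commit()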

I think several good solutions in this problem space exist. A local install of SQL Server Express can be linked to remote servers, allowing you to join local tables to remote ones. That setup serves nearly all of SQL Notebook's use cases better than SQL Notebook does. LINQpad is also very convenient for a lot of use cases.

I appreciate the interest! I may spin off the import/export functionality into its own app someday, as I had a lot of plans in that area, but I think SQL Notebook as it stands is a bit too flawed to develop fully.

2
bobochan 3 days ago 3 replies      
This looks very interesting.

I recently had to teach a series of workshops on SQL and I was trying to figure out the best system to allow students to independently work with small datasets without having to install any software. I found Alon Zakai's absolutely fantastic version of SQLite in JavaScript here:

https://github.com/kripken/sql.js

I coupled that library with a CodeMirror editor and got a working web based environment very quickly.

3
lima 3 days ago 1 reply      
Jupyter/IPython + https://github.com/catherinedevlin/ipython-sql is a wonderful workflow for interactive DB exploration.
4
nrjames 3 days ago 1 reply      
I generally use the Firefox SQLite Manager extension when I need to explore SQLite databases. It serves its purpose pretty well, though it has some annoyances and UI quirks. https://addons.mozilla.org/en-US/firefox/addon/sqlite-manage...
5
TeMPOraL 3 days ago 0 replies      
Ouch, that would be very useful to me had I known about it two months ago, when I was exploring the database dump from my old Wordpress blog (I'm finalizing the process of re-launching it as a static site). I managed though, by combination of MySQL Workbench and Common Lisp REPL.

Anyway, bookmarking for the next time I'll need to play with relational data.

6
probdist 3 days ago 1 reply      
Looks pretty neat. Reminds me a bit of Linqpad, https://www.linqpad.net/ which I've also never used.
7
grouseway 3 days ago 0 replies      
Neat.

- How about import from clipboard (useful for cut and paste from Excel)?

- It doesn't seem to recognize tab delimiters in a .txt file. Maybe the import window should have a delimiter selector?

- Does it have a crosstab/pivot tool? Most SQL dialects are lacking here because they make you explicitly define crosstab columns, which is a pain for exploration work.

8
yread 3 days ago 0 replies      
Hmm, looks nice, but the last commit was 8 months ago: https://github.com/electroly/sqlnotebook
9
ckdarby 3 days ago 6 replies      
Can't exactly see the value this brings that Apache Zeppelin doesn't already offer.

https://zeppelin.apache.org/

10
agentultra 3 days ago 0 replies      
I've always wanted a nice SQL-oriented "notebook" type of application.

I get something of this experience in Emacs via `org-mode`, `sql-mode`, and `ob-sql-mode` minus the data-importing functionality... though with babel it's probably doable in a code block using a script.

Bonus: org-mode lets you export to many formats which makes sharing results quite easy.

11
stared 3 days ago 0 replies      
For having R in notebooks (similar to Jupyter Notebooks) I really recommend http://rmarkdown.rstudio.com/authoring_knitr_engines.html.

As a side benefit, it is easy to ggplot results. :)

12
carlosgg 3 days ago 0 replies      
I will check it out. You can also use R notebooks to embed SQL code in notebook format.

https://blog.rstudio.org/2016/10/05/r-notebooks/ (scroll down to "Batteries included")

I was playing around a bit with it:

https://carlosror.github.io/baseball_mysql/

13
Dnguyen 3 days ago 1 reply      
In my daily work I often have the need to analyze Excel and CSV files from clients. I use http://harelba.github.io/q/ and it has worked most of the time. But this one seems promising, especially being able to query data from a file and join it with data from a database.
14
educar 3 days ago 0 replies      
Very nice, I have been using https://addons.mozilla.org/en-US/firefox/addon/sqlite-manage... so far. Looks like this can replace it.
15
daveorzach 3 days ago 1 reply      
Is there any Windows SQL software that can use system/machine ODBC data sources? My company uses OpenLink's ODBC drivers to access our main database (Progress OpenEdge). I have no problem using Python, Pandas, and pyodbc to connect to the database, but it isn't the best environment to develop queries.
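
For reference, a minimal sketch of the Python route mentioned above, reading a system/machine DSN into a DataFrame (the DSN name, credentials, and table are hypothetical):

  import pandas as pd
  import pyodbc

  # "ERP_DSN" stands in for a system DSN configured in the ODBC manager
  conn = pyodbc.connect("DSN=ERP_DSN;UID=report_user;PWD=secret")
  df = pd.read_sql("SELECT * FROM customers", conn)
  print(df.head())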
16
krylon 3 days ago 0 replies      
At work, I routinely have a copy of SQL Server Management Studio open for the odd ad-hoc query I need to run against our ERP system's database.

This tool looks like it might be a useful replacement for this purpose, especially if it can handle CSV data, as well.

17
bognition 3 days ago 1 reply      
Windows-only is a shame; nearly all devs I know use OSX or Linux.
18
kencausey 3 days ago 0 replies      
Anyone else understand what they are referring to in the Getting Started notebook about a 'CREATE menu'? I don't see it anywhere.
19
bendykstra 3 days ago 0 replies      
I'm curious why it is not possible to import data from an SQLite file.
20
iagovar 3 days ago 0 replies      
Would this app be nice for a beginner with DBs for data analysis?
21
Recovering from Burnout and Depression kierantie.com
435 points by kierantie  15 hours ago   194 comments top 28
1
55555 12 hours ago 18 replies      
I think a big factor in burnout is that most work is ultimately meaningless, or even morally wrong, and on a deep level, we're aware of that. Why is it so hard to make yourself work? Probably because you actually should be doing something else with your life.

If someone paid you money to kick a dog, you'd feel a strong urge to do something else. That's because you shouldn't kick dogs. But when we feel the same urge to not work, we read articles (not this one so much) that are essentially lists of ways to trick ourselves into doing things that don't matter or which will make the world a worse place.

You may have an idea that you know will make you a millionaire and which you could build, but you just can't force yourself to because fundamentally money won't make you happy and the idea is meaningless at best.

At least this is often the case for me.

2
rubicon33 9 hours ago 3 replies      
I want to highlight something that the article touched on, which for me, was a big source of burnout.

"Breakdown of community"

This can happen if you work remotely, or work in-office with a team that isn't collaborating effectively. If the work you do is isolating and you rarely collaborate with others, you may suffer burnout.

At our core we all want to feel like we're part of something bigger than ourselves. Being part of a team, even one as contrived as an office team, can still be surprisingly important for one's mental health.

If you're feeling burned out, consider whether a lack of engagement with your peers could be a contributor.

That's my $0.02 at least.

3
abhi152 21 minutes ago 0 replies      
I feel that this is an oversimplified take on a much deeper problem, and conclusions can't be drawn from the experience of the author alone. There are many things that cause burnout and many different reasons that cause depression. In the author's case work did it, but there are people in this world who get burnt out because of the sickness of their loved ones, or even because of their ambition and vision. Interestingly, the word depression is not even mentioned in the article.
4
failrate 11 hours ago 1 reply      
My recovery included a regimented sleeping, eating, and exercise plan that I introduced in stages. If you have to pick just one to start with, it is a toss-up between going to sleep at the same time every night or going for a walk every day.

I also never work overtime anymore.

Still not totally okay, but not completely burnt out anymore.

5
twfarland 10 hours ago 1 reply      
I burned out three times in salaried jobs. Mostly due to chaotic leadership. Hard work isn't the problem. Chaos and bad leadership is the problem. Moved to contracting, and have been fine since.
6
kierantie 12 hours ago 5 replies      
Hey everyone - thanks so much for reading! Burnout and depression is a topic that not enough of us talk about, even though that's often the best solution.

If you know anyone struggling with burnout or depression, or you just enjoyed my article, I'd be forever grateful if you'd share it with them on Twitter or Facebook, to help spread the word. Thanks so much!

7
throwaway8800 5 hours ago 0 replies      
I think the most striking thing about burnout is that, in my experience, it actually takes time to recover from it. Like a wound that requires healing.

It wasn't a situation where you simply remove a stressor and everything automatically gets better. A problem was created in my brain and it took a long time before I was functioning properly again.

8
postfacto 12 hours ago 2 replies      
For me the cause of burnout was having to deal with a combination of politics, the fact that those politics determined my lack of technical input, that I wasn't allowed to perform at my best because I lacked input, and then getting beaten with the underperformer stick while the project was going down the tubes.
9
soulnothing 5 hours ago 0 replies      
I've burnt out on several occasions. I'm currently battling with it right now. Part of it is my career has regressed in pay and challenge year over year. I can barely accept doing something relatively pointless. But I need to grow or challenge myself to some extent. In the long run, I work to pay the bills. I have a million other things I want to do.

But what do you do when both your personal and professional life collapse at the same time? That's the boat I'm in right now. As soon as I'm done with work, I practically start working on salvaging what I can of my personal life. I have forcefully dragged myself to the doctors, but am not getting much help from that quarter as of yet.

10
milesf 7 hours ago 0 replies      
A nurse once told me "burnout is actually heartache in disguise". While I don't completely agree with that statement, I think there's some truth in it.
11
kornakiewicz 7 hours ago 1 reply      
One of my favourite quotes from Edward Sapir (known for Sapir-Whorf hypothesis):

The major activities of the individual must directly satisfy his own creative and emotional impulses, must always be something more than means to an end. The great cultural fallacy of industrialism, as developed up to the present time, is that in harnessing machines to our uses it has not known how to avoid the harnessing of the majority of mankind to its machines. The telephone girl who lends her capacities, during the greater part of the living day, to the manipulation of a technical routine that has an eventually high efficiency value but that answers to no spiritual needs of her own is an appalling sacrifice to civilisation. As a solution of the problem of culture she is a failure the more dismal the greater her natural endowment. As with the telephone girl, so, it is to be feared, with the great majority of us, slave-stokers to fires that burn for demons we would destroy, were it not that they appear in the guise of our benefactors. The American Indian who solves the economic problem with salmon-spear and rabbit-snare operates on a relatively low level of civilisation, but he represents an incomparably higher solution than our telephone girl of the questions that culture has to ask of economics. There is here no question of the immediate utility, of the effective directness, of economic effort, nor of any sentimentalizing regrets as to the passing of the "natural man." The Indian's salmon-spearing is a culturally higher type of activity than that of the telephone girl or mill hand simply because there is normally no sense of spiritual frustration during its prosecution, no feeling of subservience to tyrannous yet largely inchoate demands, because it works in naturally with all the rest of the Indian's activities instead of standing out as a desert patch of merely economic effort in the whole of life. A genuine culture cannot be defined as a sum of abstractly desirable ends, as a mechanism. It must be looked upon as a sturdy plant growth, each remotest leaf and twig of which is organically fed by the sap at the core. And this growth is not here meant as a metaphor for the group only; it is meant to apply as well to the individual. A culture that does not build itself out of the central interests and desires of its bearers, that works from general ends to the individual, is an external culture. The word "external," which is so often instinctively chosen to describe such a culture, is well chosen. The genuine culture is internal, it works from the individual to ends.

12
spatulon 11 hours ago 0 replies      
This struck pretty close to home. I quit my job because of burnout, and I'm currently on month five of what I expected to be a two or three month break before finding another job. I still don't feel ready to go back into the real world.

Like the author, I found that work overload was not the problem - I almost never worked more than 40 hours per week. Finding support from family/friends has been difficult; I've been desperately trying to avoid the stigma attached to the word "burnout", and as a result I don't think many people realise what I've been going through, and assume I've just decided to bum around for a bit.

13
hn017132 11 hours ago 1 reply      
I burned out in 1999 and never really recovered. I miss some of the work, but not the stress, not the politics nor gamesmanship. Instead of creating capital value for faceless shareholders I've spent the past nearly 20 years creating financial and personal value for myself.
14
thatonecoderguy 12 hours ago 1 reply      
This hit spot on for me. I believe I'm currently dealing with it, and I realized not long ago that burnout is real and not an arbitrary term that people throw around.
15
mkalygin 7 hours ago 0 replies      
Recently I was feeling very frustrated about my work, about what I do in my life. This lasted for about a month. I've noticed that in such periods I compare myself to others and intentionally think that I'm worse - like literally the most useless person in the world. Usually I find some particular metric (even a meaningless one) and compare. This is a very self-destructive activity.

What helps me in the fight with burnout is realising what my strengths are. I just try to do what I'm good at, and I stop comparing myself to others, because the obvious evidence is that I'm not them. And of course I get more rest, more sleep, and switch to creative hobby activities more often. Like the author, I reevaluate my goals and priorities and become in sync with my life again.

16
xivusr 12 hours ago 0 replies      
I can totally relate to this. I experienced this after working steadily 4 years and then having a close friend pass unexpectedly. Suddenly everything I was spending all my time on felt like a waste of time. Now, almost two years later I'm doing better and even looking to work in a non-remote scenario. I think it's great to be on the lookout for signs of burnout, but on the other hand it's equally important to use our time wisely and do the things we love.
17
bholdr 7 hours ago 0 replies      
Very interesting and very nicely written! I was thinking about this myself lately (https://medium.com/@yansh/who-do-you-want-to-be-in-life-ca8f...)..

I think the advice to slow down, take a break, refocus is key to figuring things out, however, not everyone can afford to do so. It's a risk, and there is always a trade off. So I wouldn't frame the article as a guide, because it's different for everyone.

18
Karupan 5 hours ago 0 replies      
As someone recently diagnosed with depression and anxiety, this struck a chord. For me the hardest step was to prioritize my health over my job and force myself to get help.

Fortunately, I have some savings like the author, and am taking a break for a while. Thanks for this timely post!

19
faragon 10 hours ago 0 replies      
TL;DR: Don't try so hard. Like the Queen song. [1]

[1] https://www.youtube.com/watch?v=b7kUc5RcMqc

20
P4rzival 13 hours ago 2 replies      
This is also a huge problem for social workers, especially those working with high-risk clients. A lot of the time the departments are underfunded, so the workers do not get the treatment/counseling they really need.
21
BertPhoo 12 hours ago 0 replies      
How timely; last night I typed 'how to recover from burnout' into google. Thank you.
22
rjeli 13 hours ago 11 replies      
Why do I never read articles targeted at other high-stress jobs (lawyer, med student, etc etc)? Do software engineers have a unique culture that identifies this danger? Or are we the only ones that get burned out, maybe because of some self-selection into the field?
23
ggggtez 12 hours ago 3 replies      
Are there any studies on how long engineers take to recover from burnout? 6 months sounds about right from what I've heard anecdotally.
24
saral 10 hours ago 0 replies      
This has certainly been an eye-opening read for me. I believe a lot of us can relate to the traits mentioned in the blog; recognizing them will lead us to take the necessary steps at the right time.

>Burnout offers a hidden silver lining.

In the end this is what leads to satisfaction in life.

25
draw_down 12 hours ago 1 reply      
Everything in that list is happening for me right now, with perhaps the exception of the mismatch between my values and my company's. I tell my manager about it and he has been trying to rearrange things to try to help me, but I feel like there is probably a reason things got to be this shitty, and that reason will probably keep on happening. I don't know what to do. My last workplace was bad in many of the same ways, and I don't feel like I have the energy to enter into yet another employment situation brimming with optimism, only to have it turn into the suck once again.
26
notadoc 12 hours ago 0 replies      
Take a vacation / break, refocus on personal priorities. Learn what makes you happy, and do that - at least outside of work.
27
backpropaganda 10 hours ago 3 replies      
If any of you are suffering from depression, you should seriously try microdosing LSD.
28
suryakrishna 10 hours ago 1 reply      
I was working for the software giant based in Seattle. I experienced all the emotions mentioned in this post. Lucky that you had the option to take a break for 6 months. I cannot quit my job and take a big break as my visa does not permit this, but I did quit my job and spent a month looking for another job, which was even more stressful. Currently, I am lucky that I work for a company which truly values employees. I am currently recuperating and it is gonna take some time. The important lesson is never to allow this in the first place: when you have the inception of a thought that something is not going right, get on it, fix it, and never think about it again.
22
BBR, the new kid on the TCP block apnic.net
355 points by pjf  3 days ago   45 comments top 12
1
notacoward 3 days ago 3 replies      
It's almost irresponsible to write an article on this topic in 2017 without explicitly mentioning bufferbloat or network-scheduling algorithms like CoDel designed to address it. If you really want to understand this article, read up on those first.

https://en.wikipedia.org/wiki/CoDel

2
brutuscat 3 days ago 0 replies      
First saw it at the morning paper: https://blog.acolyer.org/2017/03/31/bbr-congestion-based-con...

 This is the story of how members of Google's make-tcp-fast project developed and deployed a new congestion control algorithm for TCP called BBR (for Bottleneck Bandwidth and Round-trip propagation time), leading to a 2-25x throughput improvement over the previous loss-based congestion control algorithm, CUBIC.

3
netheril96 3 days ago 0 replies      
Network performance across national borders within China has been abysmal since the censorship got much more serious. BBR seems promising, so more and more people who bypass the GFW with their own VPS (that includes me) have been deploying BBR and seeing marvelous results.
4
huhtenberg 3 days ago 2 replies      
Any data on BBR vs Reno and Vegas sharing?

Link capacity estimation is easy. It's the co-existing gracefully with all other flow control options that's tricky.

5
emmelaich 3 days ago 0 replies      
This article is not only a great intro to BBR, but an excellent introduction to the history of flow control.

Congrats to Geoff and his team.

6
abainbridge 3 days ago 1 reply      
Not to be confused with BBR enhancing the Mazda MX-5: https://www.pistonheads.com/news/ph-japanesecars/mazda-mx-5-...

Also significantly reduces latency and increases throughput :-)

7
skyde 3 days ago 4 replies      
How can we use it today? Is it in the Linux kernel already, and easy to enable?
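
For reference: BBR shipped in mainline Linux 4.9, so on a recent kernel it is a sysctl away (net.ipv4.tcp_congestion_control=bbr, with the fq qdisc recommended alongside it). A minimal check-and-enable sketch in Python via procfs, assuming a 4.9+ kernel and root:

  from pathlib import Path

  avail = Path("/proc/sys/net/ipv4/tcp_available_congestion_control")
  cc = Path("/proc/sys/net/ipv4/tcp_congestion_control")

  if "bbr" in avail.read_text().split():
      cc.write_text("bbr\n")  # same effect as: sysctl -w net.ipv4.tcp_congestion_control=bbr
      print("congestion control now:", cc.read_text().strip())
  else:
      print("bbr not listed; kernel too old or tcp_bbr module not loaded")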
8
emmelaich 3 days ago 0 replies      
> ... the startup procedure must rapidly converge to the available bandwidth irrespective of its capacity

It seems to me that you'd be able to make a rough guesstimate by noting the IP address; whether it's on the same LAN, or continent/AS.

It wouldn't matter if you got it very wrong, as long as you converged quickly to a better one (as you have to do anyway).

9
kstenerud 3 days ago 0 replies      
It seems like the best way to handle this situation is to assume that all other algorithms are hostile, and to seize as much bandwidth as you can without causing queue delay. That would reduce the problem set to a basic resource competition problem, which could then be solved with genetic algorithms.
10
skyde 2 days ago 0 replies      
Would adding this only to the HTTP reverse proxy machines provide most of the benefit, without having to patch all servers?

This seems to have the greatest effect over WAN links.

11
raldi 3 days ago 0 replies      
For those dying to know what it stands for: Bottleneck Bandwidth and Round-trip time
12
gritzko 3 days ago 0 replies      
Sounds like they adapted LEDBAT delay measuring tricks.
23
110M-year-old dinosaur discovered with skin and soft tissue nationalgeographic.com
348 points by vwcx  9 hours ago   95 comments top 13
1
michaelgreshko 7 hours ago 2 replies      
Hey, I'm the journalist who wrote the story. A few things:

- The organic-rich film preserving the outlines of scales (so, yes, fossilized skin) is only a few millimeters thick, so Mitchell had to prepare the fossil extremely slowly in order to follow the film through the matrix.

- The half-life of DNA is ~521 years at 13.1°C, as found in this 2012 paper: http://rspb.royalsocietypublishing.org/content/279/1748/4724 The team's model predicts higher half-lives at truly freezing temperatures, but even at the extreme end, there's no way DNA would survive 110 million years.

- The dating on the site is well constrained to ~110 million years old. The fossil was found in the Wabiskaw Member of the Clearwater Formation, a well-dated rock formation in Alberta. The underlying oil sands have been radiometrically dated to 112±5.3 million years old. (http://science.sciencemag.org/content/308/5726/1293)
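
For scale on the DNA point: taking the 521-year half-life at face value, 110 million years is on the order of 211,000 half-lives, so the surviving fraction of any original DNA would be

  $0.5^{\,110{,}000{,}000/521} \approx 0.5^{\,211{,}000} \approx 10^{-63{,}500}$

i.e., effectively zero: not a single intact base pair, let alone a sequenceable genome.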
2
blakesterz 8 hours ago 2 replies      
"For more than 7,000 hours over the past five years, Mitchell has slowly exposed the fossils skin and bone. The painstaking process is like freeing compressed talcum powder from concrete. You almost have to fight for every millimeter, he says."

Wow! I had no idea this took so long.

3
mrb 6 hours ago 0 replies      
Back in 2005, researchers found soft tissues in a T-rex fossil (hollow blood vessels retaining their original flexibility, elasticity, and resilience): http://www.rpgroup.caltech.edu/~natsirt/stuff/Schweitzer%20S... IIRC they were cutting the large fossil to transport it, and it was while cutting it that they unexpectedly stumbled upon these soft tissues inside. This discovery was so mind-boggling at the time.

What research has been conducted on this specific 2005 T-rex specimen? The scientific community seemed ecstatic after finding it, but I do not remember any significant discovery made from the tissue.

4
Declanomous 7 hours ago 0 replies      
>Researchers suspect it initially fossilized whole, but when it was found in 2011, only the front half, from the snout to the hips, was intact enough to recover.

Is this because of something that happened a long time ago, or because the mining machine ate the back end of the dinosaur? The article doesn't really make it clear.

5
mrfusion 6 hours ago 1 reply      
I was hoping there might be feathers preserved. Or is that not expected for this species?
6
perseusprime11 7 hours ago 2 replies      
Can we use some of the skin and soft tissue to recreate the dinosaur using CRISPR?
7
redthrowaway 7 hours ago 2 replies      
Could anyone explain why being submerged in water would lead to particularly good fossilization? I would have thought it would do the exact opposite.
8
kyberias 8 hours ago 2 replies      
Title needs correction: With FOSSILIZED skin and soft tissue.
9
lbolla 7 hours ago 5 replies      
10
trollied 6 hours ago 1 reply      
11
malkia 5 hours ago 0 replies      
By now everyone should know the Earth is 6000 years old, on both sides of the coin! What is this nonsense :)
12
Communitivity 8 hours ago 2 replies      
Am I the only one getting Jurassic Park vibes off of this? The potential genetic information from this find is amazing.
13
ge96 6 hours ago 0 replies      
My god, look at the 'armor' on that thing. Humans are weak! But we can also build bazookas. Thank you, asteroid; I would not be here today moving electrons to this server.

edit: does rapid under-sea burial mean it drowned? haha

24
Apple Watch can detect arrhythmia with 97% accuracy, study says techcrunch.com
331 points by brandonb  1 day ago   70 comments top 15
1
robbiep 1 day ago 6 replies      
With due respect to the Cardiogram developers (hi guys), as a doctor I really can't see a huge amount of value in this - my patients who present to emergency departments with paroxysmal AF are all anticoagulated or on rate control, and there has only been one instance in the last 3 years and many thousand patients when someone has presented to emergency and we have had to run through the full spectrum of echo -> anticoagulate -> cardiovert.

To the founders: what do you see as being the end game here? Are you just looking for validation (of the concept in itself, not the app - I use it on my watch)? Is the US market so different that this is particularly useful and cost-effective for detection? Do you see this as eventually displaying an early warning ahead of an event?

Thanks, and I don't mean to denigrate your efforts, but I do see lots of consumer med tech as solving problems that really aren't creating value (i.e. a proliferation of devices, wearables and algorithms that proclaim the ability to help with X but are really marginally helpful at best), and I'm wondering if I'm missing something about the actual medical benefit, or whether what I feel is true - that they aren't after being a medical device at all but instead are chasing consumer dollars by making medical claims.

2
nonbel 1 day ago 3 replies      
Accuracy is a useless metric for something like this. If you have binary data filled with 97% zeros (ie most of the time it is not arrhythmia) you can use the sophisticated machine learning technique of:

  classify <- function(x) return(0)  # always predict 'not arrhythmia'
This will give you 97% accuracy.

EDIT:

I just read the headline earlier. Now after checking:

>"The study involved 6,158 participants recruited through the Cardiogram app on Apple Watch. Most of the participants in the UCSF Health eHeart study had normal EKG readings. However, 200 of them had been diagnosed with paroxysmal atrial fibrillation (an abnormal heartbeat). Engineers then trained a deep neural network to identify these abnormal heart rhythms from Apple Watch heart rate data."

So 1 - 200/6158 = 0.9675219. My method performs just as well as theirs if we round to the nearest percent. This is ridiculous.

3
brandonb 1 day ago 3 replies      
(Cardiogram Co-Founder here)

Let me know if any of you have questions on the study, app, or deep learning algorithm. My colleague Avesh wrote a post with a little more technical detail here: https://blog.cardiogr.am/applying-artificial-intelligence-in...

4
pak 1 day ago 4 replies      
We need to see the full, published study and its methods (particularly around recruitment and exclusion criteria) before we can judge it properly. Until then, the presented statistics about accuracy, sensitivity, and specificity potentially bear no relation to real world usage, if the cohort and data quality were tightly controlled, as you'd expect for an initial study involving the makers of the algorithm. A few other thoughts:

1. Even at 98% sensitivity and 90% specificity [0], which I don't think would hold up with real-world usage in casual, healthy users, if AFib has a prevalence of roughly 2-3% [1] then by a quick back-of-the-envelope calculation a positive test result is still 5× more likely to be a false positive than a true positive (the arithmetic is sketched after this list). With those odds, I don't think many cardiologists are going to answer the phone. You'd still need an EKG to diagnose AFib.

2. There is huge variance among people's real-world use of wearable sensors, and also among the quality of the sensors. (Imagine people that wear the watch looser, sweat more, have different skin, move it around a lot, etc.) You'd likely need to do an open, third-party validation study of the accuracy of the sensors in the Apple Watch before you can expect doctors to use the data. My understanding is that the Apple Watch sensors are actually pretty good compared to other wearable sensors, but I don't know of any rigorous study that compares them to an EKG.

3. Obviously, this is only for AFib. AFib is a sweet corner case in terms of extrapolating from heart rate to arrhythmia, because it's a rapid & irregular rhythm that probably contains some subpatterns in beats that are hard for humans to appreciate. As others (including Cardiogram themselves [2]) have pointed out previously, many serious arrhythmias are not possible to detect with only an optical heart rate sensor.
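
For the curious, the back-of-the-envelope arithmetic behind the ratio in point 1, taking sensitivity 0.98, specificity 0.90, and prevalence 2%:

  $\frac{\mathrm{FP}}{\mathrm{TP}} = \frac{(1-\mathrm{spec})\,(1-\mathrm{prev})}{\mathrm{sens}\cdot\mathrm{prev}} = \frac{0.10 \times 0.98}{0.98 \times 0.02} = 5$

Equivalently, the positive predictive value is $0.0196/(0.0196+0.098) \approx 17\%$, i.e., roughly five in six positives would be false alarms at these numbers.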

[0]: https://blog.cardiogr.am/applying-artificial-intelligence-in...

[1]: https://www.ncbi.nlm.nih.gov/pubmed/24966695

[2]: https://blog.cardiogr.am/what-do-normal-and-abnormal-heart-r...

5
peterjlee 1 day ago 0 replies      
For those who are going to mention Bayes' Theorem regarding medical tests, here's a link to save you a Google search:

https://en.wikipedia.org/wiki/Bayes%27_theorem#Drug_testing

6
chris_va 1 day ago 0 replies      
If I am reading this correctly, there were 6,158 patients with ~200 true positives, so approximately 3% of the population. 98.04% sensitivity (recall) and 90.2% specificity (% true negatives) lead to...

~4 false positives for each true positive.

That isn't bad, all things considered, but still a long way to go.

7
openasocket 1 day ago 2 replies      
They aren't clear what they mean by "97% accuracy". Does that mean 97% of people with arrhythmia are correctly diagnosed, or 97% of people are correctly diagnosed or not-diagnosed? If it's the latter, it's not very helpful at all. The number of people in the general population with arrhythmia is significantly less than 3%, so if this Apple Watch test says you have arrhythmia it is far more likely to be a false positive than a true positive.
8
zobzu 1 day ago 0 replies      
I have arrhythmia and I can see it on my Garmin watch by looking at the data, as well as on my phone.

It doesn't help much though, because I don't know if it's good or bad (well, actually I know, but not because of the watch data). Doctors are still needed for this, and generally that includes a bunch of controlled tests and people listening to your heart while also gathering data (similarly to the watch, albeit with a more precise apparatus).

I guess it can help to tell people they might wanna see a doctor if they haven't though.

9
jasonmp85 1 day ago 1 reply      
Probably want to update the title, which still refers to the more general 'arrhythmia'. It sounds like Cardiogram's work is mostly focused on AFib?
10
paul7986 1 day ago 0 replies      
I'm seeing more and more people wearing Apple watches and more similar PR stories like this.

I saw one story that talked about a guy whose car flipped and he was unable to reach his phone but thanks to his watch he was able to call for help.

11
dennyabraham 1 day ago 0 replies      
Despite all the caveats around this work, early detection is one of the reasons HR trackers are a great investment. You can't manage what you can't monitor
12
caycep 1 day ago 0 replies      
The one question I have is: how much better is the DNN vs. just simple rhythm analysis (i.e., periodicity, etc.)?
13
revelation 1 day ago 1 reply      
Always great to have percentages reported for a sample size <100.
14
deepsun 1 day ago 1 reply      
Am I the only one who thinks that 97% accuracy (1/30 chance of an error) is very bad in medical diagnosis?

At least I'd expect something like 99.9% accuracy (1/1000 chance of an error) when someone gives me my own heart diagnosis.

15
threeseed 1 day ago 1 reply      
This is really great news but I wish it was actually available. The Apple Watch right now is largely useless.

It captures all of these health metrics but then does absolutely nothing with them. It really is desperate for some actual killer health use cases.

25
Pulling JPEGs out of thin air (2014) lcamtuf.blogspot.com
375 points by shubhamjain  1 day ago   39 comments top 10
1
pjc50 1 day ago 1 reply      
A couple of years ago a colleague built this for H264/5: http://www.argondesign.com/products/argon-streams-hevc/

It's not just a fuzzer: it guarantees to hit every part of the spec (subject to which "profile" you're implementing). It's not free; it's a product for sale to implementers of HEVC for verification purposes.

2
acdha 1 day ago 2 replies      
Previous discussion: https://news.ycombinator.com/item?id=8571879

Since then the bug list has grown impressively: http://lcamtuf.coredump.cx/afl/#bugs

3
vwcx 1 day ago 1 reply      
At first I thought that this title was referring to the artist who intercepted satellite internet transmissions and reconstituted the JPGs contained in the requests: http://time.com/3791841/the-green-book-project-by-jehad-nga/
4
exabrial 1 day ago 1 reply      
This is mind-bogglingly cool and simple. I've been looking for a "fuzzing for noobs" article and tool for a long time!
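
In that spirit, the core trick is small enough to sketch. A toy coverage-guided loop in Python, where coverage_of() stands in for afl's instrumented branch-coverage bitmap (all names here are illustrative, not afl's actual internals):

  import random

  def mutate(data):
      # Flip one random byte; real fuzzers use many more mutation strategies.
      if not data:
          return bytes([random.randrange(256)])
      i = random.randrange(len(data))
      return data[:i] + bytes([random.randrange(256)]) + data[i + 1:]

  def fuzz(seed, coverage_of, iterations=100_000):
      corpus = [seed]
      seen = {coverage_of(seed)}
      for _ in range(iterations):
          candidate = mutate(random.choice(corpus))
          cov = coverage_of(candidate)
          if cov not in seen:  # new code path reached: keep this input as a seed
              seen.add(cov)
              corpus.append(candidate)
      return corpus

Inputs that reach new code paths get kept and mutated further; that feedback loop is what lets valid JPEG structure emerge from a seed as unhelpful as the string "hello".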
5
some1else 1 day ago 4 replies      
So afl-fuzz could technically be considered the most advanced universal software license key generator? :) I suppose the days are numbered for offline validation.
6
derefr 1 day ago 1 reply      
Ignoring the actual "fuzzing" going on here: wouldn't it be possible to use this approach (or something like it) to make a sort of 'universal wire-protocol auto-discovery-and-negotiation library'? Picture a function like the following:

 def client_for(arbitrary_socket)
   valid_requests_discovered = repeatedly_fuzz_probe(arbitrary_socket)
   valid_request_formats = cluster_and_generalize(valid_requests_discovered)
   peer_idl = formalize(valid_request_formats)
   client_module_path = idl_codegen(peer_idl)
   compile_tree(client_module_path)
   require(client_module_path + "/client.so")
 end
I imagine that if we're ever doing the Star Trek thing, gallivanting around in starships encountering random alien species with their own technology bases, this would be the key to anyone being able to meaningfully signal anyone else.

7
xhrpost 1 day ago 1 reply      
Anyone having any luck compiling libjpeg-turbo with instrumentation?
8
amelius 1 day ago 2 replies      
What if the probability of generating an image much larger than what would fit in memory were greater than that of generating a normal-sized picture? Would the fuzzer be able to produce something in practice?
9
diyseguy 1 day ago 3 replies      
God, I wish this would build on Cygwin.
10
JabavuAdams 1 day ago 0 replies      
Crazy how quickly Gabor-like patterns start to show up.
26
Maintainers make the world go round: Innovation is an overrated ideology projectm-online.com
372 points by ForHackernews  4 days ago   168 comments top 27
1
whack 4 days ago 5 replies      
You could say the same thing about any non-glamorous/lucrative position.

"Garbagemen make the world go round. Without them, we would drown in our own filth"

"Nannies make the world go round. Without them, half the workforce would be stuck at home"

"Auto mechanics make the world go round. Without them, we would have no way of getting places"

Ultimately, all such arguments are inane and pointless, because every single job that exists in society:

A) is important to the people paying for it

B) has wages that are based on both the importance of the job and how easy it is to find someone capable of doing it

C) The idea of glamorizing any job, and allowing yourself to be influenced by a job's glamor-rating, is just superficial drivel. Don't judge yourself or others by the job listed on their business card. If you feel the need to judge someone at all, judge them by the impact that they, as an individual, are making in the world.

2
wjke2i9 3 days ago 1 reply      
> I've actually felt slightly uncomfortable at TED for the last two days, because there's a lot of vision going on, right? And I am not a visionary. I do not have a five-year plan. I'm an engineer. And I think it's really... I mean, I'm perfectly happy with all the people who are walking around and just staring at the clouds and looking at the stars and saying, "I want to go there." But I'm looking at the ground, and I want to fix the pothole that's right in front of me before I fall in. This is the kind of person I am. - Linus Torvalds @TED[1]

[1] https://www.ted.com/talks/linus_torvalds_the_mind_behind_lin...

3
Animats 4 days ago 5 replies      
"If the President had picked me to predict which country [in postwar Europe] would recover first, I would say, 'Bring me the records of maintenance.' The nation with the best maintenance will recover first. Maintenance is something very, very specifically Western. They haven't got it in Russia. If I got in there in the warehouse, let's say, and I saw that the broom had a special nail, I would say, 'This is the nail of immortality.'" - Eric Hoffer
4
draw_down 4 days ago 14 replies      
You might say I've observed this many times in my career. I think the best move career-wise is to be one of the people who makes the new thing. Those who clean up after them, bear the brunt of their design flaws and careless mistakes, will never be recognized, appreciated, or remunerated as well. At least in my experience.

Ask yourself this: who is the most famous maintainer you can think of? (Not someone who devised an innovation and then maintained it - pure maintenance)

5
maehwasu 4 days ago 3 replies      
This is like saying LeBron James is less important than Cleveland's role players, because you need five people on a side to have a basketball team.

The question isn't who's "necessary", since everyone who is necessary, no matter in what way, is necessary; necessity is a tautology.

The question is whose contributions are more replaceable.

6
shriphani 4 days ago 0 replies      
There is probably a distribution here that matters - without maintenance there is no foundation for innovation, without innovation there is no motivation to maintain - man wants to produce and consume newer ideas, materials, tools, items etc.

There's a great piece in Lapham's Quarterly about maintaining NYC's infrastructure, and how without any maintenance NYC would be replaced by forest cover within 200 years. Can't find it right now, but it is a great read, trust me.

7
rusk 3 days ago 0 replies      
This reminds me of a great quote, attributed to Thomas Edison:

"Opportunity is missed by most people because it is dressed in overalls and looks like work"

https://www.brainyquote.com/quotes/quotes/t/thomasaed104931....

8
spc476 3 days ago 0 replies      
You know, if Hollywood made the blockbuster movie "Infrastructure" [1], maintenance might be viewed as a "good thing."

[1] https://www.youtube.com/watch?v=Wpzvaqypav8&t=17m14s

9
theprop 4 days ago 1 reply      
No, it's precisely the opposite! Maintainers (using whale oil for energy) would have driven every whale to extinction and left the world without an energy source a century ago. Maintainers (using horses for travel) would have drowned Manhattan in horse shit a century ago.

Innovation is, if anything, under-rated, under-funded and under-supported. The homes of hundreds of millions of people, and energy itself, are threatened by depleting fossil fuels and global warming... and some of the major efforts to stop this have depended on effectively "insane" entrepreneurs like Elon Musk... not a smart system! All the while, hundreds of billions of dollars in health care costs for, say, unnecessary tests flow to negative-value-addition maintainers.

Maintainers mostly either conservatively follow & accept or exploit the current system. It's innovators who've driven down the cost of lighting your home to a few hours of income, brought about the ubiquity & cheapness of books & information (perhaps to the detriment of wisdom, but that's another story), stopped war through protest, and ended non-man-made famine.

10
mentat 4 days ago 1 reply      
On a lighter note, Pump 6 from "Pump 6 and Other Stories" (https://smile.amazon.com/dp/B0071CX7V4/) is a fun take on a world that has become too good at making things that don't need maintenance.
11
kbutler 4 days ago 0 replies      
Innovation changes the world, maintenance keeps it going.
12
upofadown 4 days ago 2 replies      
Obviously the thesis is true. Maintenance is crucial while innovation rarely changes anything substantive...

I think that is the reason that we hold innovators in high regard. People are very bad at it and it happens so infrequently. We rarely get the right person when we hand out credit so such idolization is usually meaningless, but I suppose in the long run that is not important. We have this irrational need to attach a person to the idea.

So we end up failing to properly credit any of the people that make and keep our civilization...

13
shade23 3 days ago 0 replies      
Wouldn't maintenance lead to innovation, analogous to necessity being the mother of invention?

Most inventions came up as an easier or better way of doing something that the current maintainer (innovator-to-be?) decided to rework or recreate.

And if the author is talking only about maintenance where no development can be done (not even changes that make the life of the maintainer easier): IMHO, maintenance procedures should also be constantly improved, and hence that would lead to innovation.

14
frabbit 3 days ago 0 replies      
I found the thesis interesting and plausible: that innovation is fetishized.

But on a slight tangent I wondered whether the "innovation" that they complain about is a particular variant, one that we're all familiar with here: a pseudo-libertarian start-up variant

 "Innovation ideology is overvalued, often insubstantial, and preoccupied with well-to-do white guys in a small region of California"
It seems easy to argue against this tired representative of innovation.

By contrast there are those that would argue that most of the major technological and scientific gains have arisen, not from these VC hype-machines, but from large-scale state planning and investment. One of the best expositions of this argument is from economist Mariana Mazzucato: https://www.youtube.com/watch?v=yPvG_fGPvQo

15
cgio 3 days ago 2 replies      
Or as I like to say, "incompetence makes the world go round". Extracting little glimpses of functionality out of a chaotic mess is a challenging, at times satisfying and definitely valuable exercise that keeps many people at work...

Maintenance is not the opposite of innovation, it is the opposite of good design.

16
danielam 3 days ago 0 replies      
There's an analogy here vis-a-vis tradition/progress. In order to be reasonably sure that a change is an improvement, you must understand what you're changing and how. To borrow a Chestertonian example, if you encounter a fence and you don't know why it's there, find out why before you remove it. Maintainers are in the best position to understand the impact of making changes, and because of that, they're able to function as either advisers or as "innovators" by knowing where improvements can be made and having the knowledge to understand why they're improvements.
17
madenine 3 days ago 0 replies      
The maintainers! I know a couple people connected to this group - heard great things about their 2nd conference last month.

The premise is great. From Russel's article on Aeon:

"We organised a conference to bring the work of the maintainers into clearer focus. More than 40 scholars answered a call for papers asking, What is at stake if we move scholarship away from innovation and toward maintenance? Historians, social scientists, economists, business scholars, artists, and activists responded. They all want to talk about technology outside of innovations shadow."

18
rdiddly 4 days ago 0 replies      
What about Improvers? Not a word about us? You can innovate while maintaining.
19
acchow 4 days ago 0 replies      
Ridiculous. Maintenance is momentum. Innovation is boost.

Each boost is minuscule, but our momentum is enormous after thousands of years of human development, so of course maintaining our momentum gets us incredibly far.

20
pc2g4d 4 days ago 0 replies      
I'd say there's no fine line between maintenance and innovation. Many innovations arise in response to the pains of maintenance.
21
carsongross 4 days ago 5 replies      
Related video by Jordan Peterson, on how liberals and conservatives need one another because liberals (high trait openness) innovate, but conservatives (high trait conscientiousness) maintain things:

https://www.youtube.com/watch?v=3Ho5VZp_ps4

22
golergka 3 days ago 0 replies      
OT, but this site looks just great with Javascript turned off (as I usually do with all the "trendy"-looking longreads, as they tend to be processor hogs). Even animations on the title screen. Awesome front-end job.
23
deskamess 4 days ago 0 replies      
Does Edison's 1% inspiration (innovation) and 99% perspiration (maintenance) apply?
24
lowbloodsugar 4 days ago 0 replies      
If maintaining these things is important, might we have to wonder how they came to be?
25
sammyo 4 days ago 1 reply      
Maintainers make the world go round, innovators MAKE the world.
26
mlindner 4 days ago 0 replies      
I just have to laugh at this title. It's delusional.
27
sebelk 4 days ago 2 replies      
And what about devops? With OpenStack et al., aren't gurus telling us that maintenance (administration) doesn't exist any more?
27
Amazon Echo Show amazon.com
389 points by metaedge  3 days ago   443 comments top 66
1
eclipxe 3 days ago 6 replies      
People are missing this:

"With the Alexa App, conversations and contacts go where you go. When youre away from home, use the app to make a quick call or send a message to your familys Echo. Alexa calling and messaging is freeto get started download the Alexa App."

Alexa is now in the messaging and communication game.

https://techcrunch.com/2017/05/09/amazon-enables-free-calls-...

2
pwaivers 3 days ago 3 replies      
A few thoughts:

- This is way less creepy-looking than the Amazon Look (https://www.amazon.com/Echo-Hands-Free-Camera-Style-Assistan...), but it is actually very similar.

- It is great to add a screen to the Echo. Just more feedback on interacting with it, and possibility to watch YouTube, Netflix, etc. casually.

- It doesn't have the same cool minimalism as the Echo. The Echo sits on my counter and looks nice when not in use. I think this one looks much clunkier.

- I definitely want to try one.

3
mholmes680 3 days ago 5 replies      
It's interesting to see how fast Amazon can come to market with these new hardware pieces. I guess the fallout of the Amazon Phone at least yielded some lessons about hardware suppliers, etc... I realize they're throwing hardware out there prior to seeing what the software can do with it, but I think it's necessary to get people locked in.

I like their approach from the business perspective. Give the people a voice controlled speaker. Give them a remote! Now, give them a voice-controlled camera! Now, give them a voice-controlled screen! Soon, give them <insert novel sensor> and let them go hands free! Rinse-repeat.

4
silvanojr 3 days ago 4 replies      
I was battling back and forth FOR A MONTH with their skill certification approval team for a skill update that would allow customers to call people by name, where in the first version it was only by phone number.

They would fail the certification because apparently people didn't know how to test, or used fake numbers to make phone calls and complained the call would not connect, or the certificate validation (that was working before) would fail, etc. All sorts of things. VERY frustrating process. I would make no changes, submit the skill again for certification, and get different results.

Now they announce their own calling feature, a week after finally approving our update.

5
justcommenting 3 days ago 2 replies      
The Amazon Echo Show seems very much like a telescreen, straight out of Orwell's 1984: https://en.wikipedia.org/wiki/Telescreen
6
verytrivial 3 days ago 3 replies      
I must be one of those old farts who prefers privacy over convenience.

I do not want what amounts to an always-on black-box surveillance device in my home and I simply do not understand why other people think it is okay. I honestly don't.

Down with this sort of thing!

7
FLGMwt 3 days ago 15 replies      
Any echo owners feel like they would get additional value out of this?

90% of my interaction with my standard echo has been "what's the weather".

Even when I want visual controls for music, I'd rather pull out my phone than walk over to a screen.

8
ejtek 3 days ago 3 replies      
It continues to surprise me how far ahead Apple is letting Amazon/Google get in this area. I've always been a big fan of Apple (despite their closed ecosystem), but have to admit that Amazon is seriously outplaying them on this front. Hopefully Apple surprises me and comes up with something even more innovative that can compete.
9
colemannugent 3 days ago 6 replies      
I feel like this entire product could be a Chromecast-esque dongle that connects to a TV. Having a personal dashboard would actually be quite useful, but this seems like they want to sell appliances not experiences.

Maybe they've gone with this form factor because of the 2x 2" speakers? But why would I want that when it could be plugged directly into my home audio setup?

Or maybe it's so they can include a touchscreen? But I thought the whole point was hands-free conversational interaction?

I guess I'm missing the point of this. Why would I, as a normal consumer, get this instead of a regular Amazon Echo?

10
imartin2k 3 days ago 2 replies      
Maybe that's just me, but based on the photos, this device looks quite ugly - which matters for a gadget that people put inside their homes, doesn't it? The "original" Echo has a futuristic design. This one feels more like something created in the 70s or 80s.
11
cphoover 3 days ago 0 replies      
People here are really missing the point... This isn't another ipad it's a different way of interacting. It's not just video message either, it's a new human interface for interacting with software. You can communicate with someone and get suggestions at the same time. Think conversing with a friend and having Alexa aid in the discussion.

Friend 1: Where do you want to go to the movies tonight?
Friend 2: I dunno. Alexa, have any good suggestions?
Alexa: Star Trek is playing at x:00 at X theatre.
Things of this nature.

12
danso 3 days ago 1 reply      
I'm not willing or interested enough to enable voice activation (Siri) on my phone or desktop, but thought Echo would be nice to have as a music player. The voice recognition is so reliable -- not just the NLP, but the mic array (unlike trying to activate Siri on the iPhone) -- that it's converted me to a true believer in voice interfaces, at least for simple tasks, such as playing music, turning on NPR, and activating timers and alarms. I do have the Fire stick connected to a projector but I've definitely longed for the ability to navigate YouTube or HBO on a tablet-like device with Alexa (again, not just the NLP, but the mic array, which Fire tablets don't have)

This seems like a nice step in that direction but I've been spoiled by the low cost of the Echo Dot, which when it's on sale is so cheap it can be a stocking stuffer. I don't think I could pay $229 for the first generation version of the Show, but will likely get its cheaper, more advanced iterations.

13
noonespecial 3 days ago 3 replies      
Why does it have to be a tiny self contained screen? Until I can say "Alexa, on the main view screen" (right after "Alexa, Earl Grey, hot" of course), we've got progress to make.

Which reminds me, I've got a Keurig to hack...

14
tetrep 3 days ago 6 replies      
I don't see the value in this over a tablet with a stand. The tablet is portable, can do more things, and already exists in many people's homes.
15
voltagex_ 3 days ago 1 reply      
It's the Chumby for 2017, with less freedom to hack.
16
UXCODE 3 days ago 4 replies      
In the United States, what is the need for speech recognition devices? At least in Japan and China, speech recognition technology has not reached a practical level, and demand is small.
17
yalogin 3 days ago 5 replies      
Great, now the Echo will record all video as well and "anonymize" it and use it to improve their systems. This class of devices is the most puzzling to me. People know their value proposition is to record everything, but then keep buying them. I keep waiting for the day when the scales tip in favor of privacy, but that never happens.
18
test6554 3 days ago 0 replies      
If they are going to enable calling, I sincerely hope they learn from the current phone spam and email spam mess and don't let just anyone call you at any time.

Ideally, you could authorize people to call you by giving each person/entity a different token that authorizes them to call you. Then if that person/entity sells the token to 3rd parties, you not only know who sold you out, but also you have the ability to revoke that token easily.
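Something like this, for instance: a minimal sketch in Python (all the names here are made up for illustration; this isn't any real Alexa API):

  import secrets
  from typing import Optional

  class CallAuthorizer:
      # One capability token per contact: a leaked or resold token
      # identifies who leaked it and can be revoked on its own.

      def __init__(self):
          self._tokens = {}  # token -> contact it was issued to

      def issue(self, contact: str) -> str:
          token = secrets.token_urlsafe(16)
          self._tokens[token] = contact
          return token

      def may_call(self, token: str) -> bool:
          return token in self._tokens

      def revoke(self, token: str) -> Optional[str]:
          # Returns the contact the token was issued to,
          # i.e. who sold you out.
          return self._tokens.pop(token, None)

  authorizer = CallAuthorizer()
  t = authorizer.issue("grandma")
  assert authorizer.may_call(t)
  print(authorizer.revoke(t))  # -> grandma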

19
trapperkeeper79 3 days ago 3 replies      
Amazon is killing it in IoT/Smart Home. However, IMHO, they are making a bit of a mistake by not allowing developers to monetize their platform (at least the last time I checked). There were also certain device functions that apps could not utilize (e.g. programmatically mute and unmute). I suspect they'll have a walled-garden approach to their new Echo devices too ... if this was open, they'd win it all (again, just my opinion).
20
dharma1 3 days ago 0 replies      
The main thing that annoys me about Echo is that the knowledge graph is so poor. I can only choose from a limited number of things to ask the damn thing, use Wikipedia, or start installing 3rd party skills.

I wish I could install OK Google on Echo.

Edit - looks like you can, with a custom skill - https://www.youtube.com/watch?v=PR-LVPMU7F4

21
sergiotapia 3 days ago 0 replies      
Looks like something out of Robocop or Total Recall. I'm not sure if I'm excited or terrified! Let's say both.
22
hungtraan 3 days ago 0 replies      
I honestly think that, given the use cases, the Echo Show would be much more useful if the static structure had a rotating base, which would allow the Echo Show to rotate toward the source of a voice command (disableable via a setting, for privacy concerns). That would allow truly versatile use of its screen while offering the same hands-free experience.
23
rrggrr 3 days ago 0 replies      
This was the direction I expected Apple to take prior to Jobs' passing. It seemed the rumored Apple TV would combine Siri with traditional television. Apple faces serious threats across the entertainment spectrum, from content to device.

Everyone speculating on Apple acquisitions should be considering a Sony or LG buyout. I own stock in neither.

24
dafty4 3 days ago 0 replies      
Brushed aluminum or some other color scheme would look better. Plastic black matte looks cheap and meh.
25
JimmyAustin 3 days ago 1 reply      
Interesting choice going with an x86 chip. This could potentially be a hacker's dream if you got Linux running on it.
26
GrumpyNl 3 days ago 2 replies      
Why you would want to have an electronic spy at home is beyond me.
27
chaostheory 3 days ago 0 replies      
It would be nice to control FireTV with an Echo. Still waiting.
28
wppick 3 days ago 0 replies      
Eventually, with the internet of things, there will need to be a "home brain" type device to control all of the devices in your house. The company that holds that position of controlling what devices can work with others will have a lot of market power.
29
kayoone 3 days ago 0 replies      
Love the concept of the Echo; however, I don't see much value in a screen: for most tasks you'd need one for, it's usually worth the effort to pull out the phone, since you're also not bound to a specific location.
30
rtechnologies 3 days ago 0 replies      
I developed this same thing 6 months ago. Setup and commands are a bit cumbersome due to being 3rd party but all you need is an Echo device and Android device with the Echo Sidekick app. Does everything the Show does except voice calls but you can send messages through Echo devices to other devices with the SideKick app. https://play.google.com/store/apps/details?id=com.renovotech...
31
malchow 3 days ago 1 reply      
What does a Sonos user do when he has already deployed a dozen Sonos speakers throughout his house? Will there ever be a microphone-only Echo device that can link into a Sonos system?
33
vthallam 3 days ago 1 reply      
This is more like an iPad with a better Siri. I guess talking to parents and watching child cams are the target use cases for this. A device which sits in the living room or bedroom need not show me CNN.
34
mycodebreaks 3 days ago 0 replies      
why should a Fire tablet paired with speakers not be able to do this?

I am not against any category of products, but as a person who likes to own and manage fewer devices, I like my devices to be versatile.

35
amelius 3 days ago 0 replies      
> If you want to limit your interaction with Alexa, simply turn the mic/camera button off.

Of course, that button is a handy indicator for Amazon to know when to record stuff :)

36
cturitzin 3 days ago 0 replies      
This is exciting for healthcare use-cases. Simple stuff like video clinician checkups or remote monitoring such as tracking and recording physical therapy progress.
37
pound 3 days ago 0 replies      
Now they're much closer to solving 'smart home assistant' online shopping. Communication only via voice results in two uncomfortable options: either you blindly believe that you'll get the best price/option ("order xyz"), or you get stuck slowly listening to options read out (try listening to a search result list). That barrier is stepped over with this little screen enhancing the shopping experience, when needed.
38
relyks 3 days ago 0 replies      
If you can place multiple of these in a house and use them all together as an a/v intercom system, that'd be by far a killer feature. E.g. you can talk to your child who's in the basement, or talk to a coworker at another cubicle.
39
coding123 3 days ago 1 reply      
They used the same picture of Dad seeing his grandchild like 3 times; they need to push out different pics.
40
scotchio 3 days ago 0 replies      
Love that Amazon is throwing a lot of options out there.

Only wish the outer shell on this one looked a bit nicer / slicker.

Really want an "Alexa" type replacement for smoke detectors. Location seems perfect for speakers / music in a house.

Scary to think that privacy for average consumer is basically dead.

41
kasperset 3 days ago 2 replies      
Looks like a mini tablet? Why can't a tablet be used for the same purpose? Perhaps audio capability?
42
LeoNatan25 3 days ago 0 replies      
It's amazing how much of a difference a marketing video makes. This and the Echo Look are not that dissimilar, yet one appears to be friendly and essential, while the other is creepy as hell.
43
davidcollantes 3 days ago 1 reply      
From a user's perspective, I think there are too many Echoes. It makes it hard to decide which to get, especially for those who can only afford (or want to deal with) one. Too much fragmentation.
44
vineet 3 days ago 2 replies      
The video calling capability seems especially neat - I wonder if they will interoperate with Facetime, Google Duo/Hangouts, and other video calling protocols. It will make our lives so much easier.
46
PascLeRasc 3 days ago 1 reply      
Are they just "announcing" these devices by putting them up for sale? It feels like we need an Echo keynote to learn about their direction and they could get a lot more hype that way.
47
jlebrech 3 days ago 0 replies      
I could see the use in the kitchen: ask Alexa to look up recipes or turn the page while my hands are greasy or covered in flour.

This functionality will probably need custom firmware, though.

48
MarketingJason 3 days ago 0 replies      
IMO Amazon should focus on enabling and assisting the development of more skills and integrations for echo devices before pushing out newer models or adding features.
49
themtutty 3 days ago 0 replies      
Their demo video is cringe-worthy. I understand that you're also marketing to non-technical folks, but it's like a film from grade school.
50
archeantus 3 days ago 0 replies      
Looks great. But how about that mural?? The main takeaway I had from that video is I need to pick up sponge painting in my kid's rooms.
51
slackoverflower 3 days ago 4 replies      
What is Amazon's long term strategy with all these devices with the main feature still being voice?
52
Kiro 3 days ago 0 replies      
Perfect for viewing and browsing recipes and recipe videos without having to touch the screen.
53
CreepyGuy101 3 days ago 0 replies      
I have to ask why these things aren't gesture activated ...
54
Animats 3 days ago 0 replies      
Not only can you watch it - it watches you!
55
agumonkey 3 days ago 0 replies      
Kinda merging tablet/webcam + alarm clock usage. Not bad.
56
pmcpinto 3 days ago 0 replies      
So this is kind of a tablet, but with voice as the main UI
57
pateldeependra 3 days ago 0 replies      
This is similar to a tablet kept in my room.
58
mandeepj 3 days ago 1 reply      
A better alternative to the Sony Dash, which got abandoned
59
salimmadjd 3 days ago 0 replies      
iPhone for grandparents? or Echo for grandparents.

For me, this product makes sense for the elderly in the digital age, to keep them connected.

60
gcb0 3 days ago 0 replies      
so amazon is trying to corner the tablets-junior-can't-take-to-the-restroom market?
61
kensai 3 days ago 0 replies      
"Alexa, submit this comment to HN"

It works! :D

62
staz 3 days ago 2 replies      
"Alexa, show me the kids' room."

Am I the only one that's creeped out by that?

63
CreepyGuy101 3 days ago 0 replies      
Well, that just got creepy. As if security wasn't an issue before.
64
bettyx1138 3 days ago 0 replies      
the video seems like a parody
65
uptown 3 days ago 5 replies      
The telescreen received and transmitted simultaneously. Any sound Winston made, above the level of a very low whisper, would be picked up by it; moreover, so long as he remained within the field of vision which the metal plaque commanded, he could be seen as well as heard. There was of course no way of knowing whether you were being watched at any given moment. How often, or on what system, the Thought Police plugged in on any individual wire was guesswork. It was even conceivable that they watched everybody all the time. But at any rate they could plug into your wire whenever they wanted to.

-Orwell, 1984

66
ganfortran 3 days ago 1 reply      
Here is a crazy idea: Why didn't Amazon make this its own Nintendo Switch? A stand with a detachable tablet? Wouldn't this be even better?
28
How to shoot on iPhone 7 apple.com
361 points by waqasaday  1 day ago   194 comments top 28
1
swah 1 day ago 2 replies      
I'm an Android user for 10 years now but.. every time one of those websites/ads appears its always from the same company.

Apple is so ahead of the other companies on actually promoting their stuff.

2
robertwalsh0 1 day ago 3 replies      
These bite-sized videos that show the real UI are a great idea. It seems obvious now in retrospect for them to execute something like this - I know so many iPhone shooters that will love this resource.
3
terrywang 1 day ago 7 replies      
I've been an Android user since Nexus S (Android 2.3), Nexus 4, Nexus 5, all the way up to Nexus 5X (7.1.x).

The LG-made Nexus 5X bricked (suddenly rebooted and never booted again) at 13 months old due to a known hardware fault; there is a lawsuit against LG in California https://arstechnica.com/tech-policy/2017/04/lg-bootloop-defe...

My Nexus 5 had a power button issue (it would automatically switch off, sigh...). Nexus 4 build quality was bad, too.

I lost 3-4 days' worth of photos (I was travelling and didn't back up). Luckily I was able to boot once (out of ~40 attempts) after it bricked, by giving it some heat (thermal issue), and backed up the photos before it quickly died again (there are videos on YouTube on fixing the problem with a heat gun). Eventually I decided I'd had enough of crappy Nexus phones (I am fine with Android though). The Pixel / Pixel XL is more expensive than the iPhone 7 or Plus; I really cannot justify it.

Finally I've moved to iOS again, after 6-7 years, as I decided not to manually flash phone OSes and root any more (I have my Arch Linux workstations and MBPs). My requirements for a phone come down to:

- Reliable hardware and a high quality finish
- Strong hardware-based/accelerated encryption
- Fingerprint recognition and authentication
- Stable and swift OS (greatly improved with APFS; regained faith in Apple)
- Good camera - the best camera is the one with you
- Long battery life (ideally supporting both fast charge and wireless charge)

The iPhone 7 Plus meets all of these. So far I've been on it for 2 weeks. iOS definitely has better attention to detail and better UX, at the cost of less control over the OS. A lot of apps that don't support fingerprint authentication on Android do support it on iOS, I was surprised to find out.

The only complaints are that 1. there is no HN client as good as Materialistic (let me know if you know any...) and 2. there is no native strongSwan client (Gboard has introduced more language support, so it's no longer a big concern). Other than that I am happy.

4
hprotagonist 1 day ago 3 replies      
Huh. I did _not_ know that you could force touch and slide to manually adjust exposure.
5
rubatuga 22 hours ago 0 replies      
Wow, these are really great tips for anybody thinking that their photography is a bit stagnant. Thumbs up to Apple for providing simple, creative, 60fps <3 videos that could be generalized to any camera.
6
lflux 1 day ago 12 replies      
I'm missing the "How to shoot one-handed at a concert"[0]. I'd love to have zoom not be pinch-only.

[0] since my other hand is holding my drink

7
philipjoubert 18 hours ago 0 replies      
In case anyone is curious - these videos were shot in and around Cape Town, South Africa.
8
custos 1 day ago 1 reply      
The title made me think it was someone shooting an iPhone with a firearm...
9
nsxwolf 1 day ago 3 replies      
I love Portrait mode and have taken some photos with it that I am happy with, but really, it would be nice if they also showed us how they professionally lit that cafe, because you are never, ever going to take a photo that looks like that in the real world.
10
duncans 18 hours ago 1 reply      
Should be made more obvious that the actually useful content of the page needs to be scrolled down to. A more useful link (that scrolls to the content) to give friends & family might be https://www.apple.com/iphone/photography-how-to/#section-car...
11
lifeformed 1 day ago 4 replies      
Nothing happens when I click on any of the play buttons. Is anyone else having this problem? This isn't the first time an apple.com site didn't work correctly for me at all...

Do I have to be on a Mac or iPhone to view it or something?

12
nfriedly 10 hours ago 0 replies      
I like that the action shot one has nothing to do with careful timing - just hold down the button and pick out a good photo later.
13
juandazapata 14 hours ago 2 replies      
Does anybody know what genre the music used in the videos is?
14
naviehuynh 1 day ago 1 reply      
Funny how they still use the old iOS theme on their player's seek bar. It has been years since the introduction of the new UI and I still think it sucks. Look at the share button on the "Edit selfie" video; it just feels like a toy design project by some students.
15
dutchbrit 20 hours ago 2 replies      
Seems like I don't have the option to choose portrait, nor do I have the option to change exposure. Is this a guide for all iPhone 7s or just the iPhone 7 Plus?!
16
sideproject 1 day ago 2 replies      
Is there a technology to.... say.... zoom out more? For example, when you take a selfie, sometimes things are too close.. I wish I could 'somehow' zoom out, but I wouldn't have the first clue how that would be achieved.
17
saurik 21 hours ago 1 reply      
In "how to shoot a group portrait" (or all of them, honestly ;P), one of the steps should be "turn the damned camera 90 so everyone is actually in the frame".
18
rahilsondhi 1 day ago 2 replies      
This is amazing. I wish the selfie video would talk more about angles. My girlfriend tells me my angles are always wrong :/

Also, Google should do this for Pixel.

19
peterburkimsher 1 day ago 1 reply      
Could the two cameras be used to get 2 different perspectives, and therefore remove power lines or other imperfections?
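Even without the second lens you can approximate this today: take a few shots while moving the camera a little, align them on the distant background, and median-stack them. Thin foreground occluders like wires land on different pixels in each frame, so the per-pixel median recovers what's behind them. A rough sketch with OpenCV (just the general idea, certainly not Apple's pipeline; translation-only alignment, so it assumes a mostly distant scene):

  # pip install opencv-python numpy
  import cv2
  import numpy as np

  def remove_thin_occluders(paths):
      # Align each frame to the first one, then take a per-pixel
      # median across the aligned stack.
      frames = [cv2.imread(p) for p in paths]
      base_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
      h, w = base_gray.shape
      aligned = [frames[0]]
      for frame in frames[1:]:
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          warp = np.eye(2, 3, dtype=np.float32)
          # ECC alignment on the dominant (background) content; some
          # OpenCV versions also take criteria/mask/filter arguments.
          _, warp = cv2.findTransformECC(base_gray, gray, warp,
                                         cv2.MOTION_TRANSLATION)
          aligned.append(cv2.warpAffine(frame, warp, (w, h)))
      return np.median(np.stack(aligned), axis=0).astype(np.uint8)

  cv2.imwrite("clean.jpg",
              remove_thin_occluders(["a.jpg", "b.jpg", "c.jpg"]))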
20
tptacek 1 day ago 1 reply      
I did not know about burst mode!
21
Tanegashima 1 day ago 1 reply      
Apple always comes up with great soundtracks to their ads/infomercials. Sometimes made by themselves/exclusively for them.

I wish they would give developers a royalty free library for our App demo videos.

22
jamesmccann 1 day ago 1 reply      
Do you need a video to tell you to switch to the front facing camera to take a selfie?
23
mandeepj 12 hours ago 0 replies      
Shooting a panorama on iPhone is neither easy nor intuitive. Unfortunately, they did not even explain that in this series.
24
qrbLPHiKpiux 16 hours ago 0 replies      
All vertical shots. Communicating to the masses. Market to the masses, live with the classes.
25
wand3r 1 day ago 0 replies      
Look, I'm a long-time Apple user and they have a great camera... But seriously, this has been their core thing for several years. They need to implement a rule that requires scaling performance and battery life at parity with camera improvements...
26
ShirsenduK 21 hours ago 1 reply      
There was a time when Apple products were user friendly but now we need guides. They said technology would ease our lives. I'm disappointed with this shift at Apple where they want to look and sound cool rather than be useful.
27
dustinmoris 19 hours ago 5 replies      
Luckily I have a Google Pixel; the only thing I need to know to shoot a perfect photo, in fact an even better photo than an iPhone 7 could ever take, is:

1. Open Camera app

2. Take photo

It is really that simple, no jokes :)

Maybe I should create a website with these instructions.. mhh..

28
phdify 16 hours ago 0 replies      
This is a stellar example of marketing gone wrong by a company that's out of touch with a vast number of their customers. Yet it simply doesn't matter. Apple's massive revenue stream and its astronomical cash hoard make them impervious to various failures here and there. It's great to be Apple.
29
Startup Graveyard - History Shouldn't Have to Repeat Itself startupgraveyard.io
356 points by tilt  1 day ago   102 comments top 35
1
jstandard 1 day ago 10 replies      
An interesting idea. The trouble here is that analysis like this tends to document the symptoms of failure as if they were the causes. The actual causes are more likely to be very complex, based on circumstances unique to the startup/people running it, require deep insider knowledge of the company, and in some cases be things people aren't willing to admit or recognize.

It's the flipside of a similar problem in analyzing why companies are successful. [1]

The little disclaimer at the bottom really says it all.

The attempt is a noble one, marred by data and insight quality issues. I think it could be useful if the site can source insightful analysis from founders/insiders and make it easy to search by market/product category. Perhaps even adding a badge to information which came from a founder.

[1] http://www.tomorrowtodayglobal.com/2011/12/09/good-to-great-...

2
vivekd 1 day ago 1 reply      
Lots of startups fail for timing reasons. There seem to be a lot of startups that maybe just started at the wrong time. One example is QBotix, which provided robotics for the solar industry: probably not much of a market a few years ago, but in the coming few years that should explode.
3
inputcoffee 1 day ago 0 replies      
Potentially a great idea. I have wanted to see data on the reasons startups failed.

One of the problems with this sort of thing is that it is not clear what the category of the failure should be. In a sense, the vast majority of failures comes down to: no sales.

But is this because of "product-market fit" (they just didn't want it), or "customer acquisition costs" (couldn't get the word out), or "lack of runway" (can't get the word out fast enough).

That's why you almost need some hybrid of story telling and data so you can compare what Balaji Srinivasan calls "the idea maze."

I wish I could compare the series of decisions in some manner, and not just read it as a story.

4
blakesterz 1 day ago 3 replies      
Anyone else reminded of Fucked Company? The website or the book :-)
5
AndrewKemendo 1 day ago 1 reply      
Every year the idea of a startup post mortem site or study comes up and as always goes nowhere because they can't actually draw a causal relationship and tell the story of why the company in question rolled up.

Besides the fact that there is no incentive (in fact there is negative incentive) for key players to contribute to the study of failure, it also requires an insane amount of depth in the very specific field in which the company was operating.

It's the same problem as failed replication studies not getting documented: it's easier, and there's more incentive, to just try again than to study the issue.

Really just needs to be a non profit that would run the studies and maybe turn it into a consultancy or something.

6
z3ugma 1 day ago 3 replies      
Looks like we hugged it to death: "Error 508. Resource Limit Is Reached"
7
simplydt 1 day ago 0 replies      
In case this helps, you might want to A/B test your homepage with and without the coffins and watch the bounce rate. I bounced; the feelings created by my amygdala were too strong and overruled my curiosity and any desire to learn!
8
RangerScience 8 hours ago 0 replies      
I think a very useful feature for this site is a "I worked here and want to talk about it" listing.

That way, one thing I could do is research my idea / etc for similar startups that failed, call up the people, and ask what went wrong.

The goal here, it seems, is to help you not fail at something similar. The best information will come from those that worked there. So, why not index just enough information to find those people, and then help connect you to them?

9
Animats 1 day ago 1 reply      
I used to do something like this on Downside.com, but only for public companies where you could get the financials from the SEC. Then you have some hard data. Startup Graveyard seems to be listing companies that failed before they even launched. There needs to be some minimum qualification, such as "actually had at least one paying customer".
10
ChuckMcM 1 day ago 0 replies      
It would be interesting to include (but really hard to do) startups that 'exited' through acquisition but failed to clear the liquidation preferences. This sort of exit could also signal a 'mistake not worth repeating'.
11
sillysaurus3 1 day ago 0 replies      
It'd be nice if there were a summary of each company without clicking through.
12
abetusk 1 day ago 2 replies      
It's sad that all the content, source code and other intellectual property is just lost. Does anyone know of a resource that has or lets companies open source their code instead of locking it away forever? Like a farm sanctuary but for failed startups?
13
magic_beans 8 hours ago 0 replies      
This looks like a waaaay prettier version of http://autopsy.io
14
jparise 11 hours ago 0 replies      
Seeing this prompted me to revisit http://thecan.org/, "a pet cemetery for dead games."
15
sh87 1 day ago 0 replies      
Can someone build an AI out of this to predict success / failure factors ?

For the record : I'm being sarcastic :/

16
azr79 1 day ago 0 replies      
This website should in the list too.

Reasons of failure:

- Couldn't scale its web servers

17
jorgeer 1 day ago 1 reply      
Seems like front page HN is too much for this website. Only getting 500s and error pages.
18
vtange 1 day ago 0 replies      
Great idea. Site is hugged to death at the moment I think

A quick issue that arose in my head is what happens if people turn away from ideas because they see it in the graveyard? Some ideas may have seen the light of day too soon...timing for startups is important, as shown in this TED talk - https://www.youtube.com/watch?v=bNpx7gpSqbY

With this resource at hand, people might mistake bad timing for a bad idea.

19
wand3r 1 day ago 0 replies      
This is cool, and it's always smart to check past implementations like this - as long as you learn why they failed, not that the idea is impossible. VR is a good example: I didn't see any VR companies on that list, but many gave up on VR because it was too early for it to be possible until a few years ago. Another good example is digital currency.

Also, you can learn from a company like Clinkle, which should definitely be added to the list.

20
nkkollaw 20 hours ago 0 replies      
> History Shouldn't Have to Repeat Itself

I used to follow another startup shutdown site that got shut down, though. :-)

Cool idea, I love reading those. I have no clue if reading about failures can make you successful, though.

21
traviswingo 1 day ago 0 replies      
I like this idea; however, this is very subjective. Laying out 6 solid reasons for _why_ a startup failed is a bold statement. In reality, there usually isn't a concrete reason why, but I guess it's a good exercise to at least think about it collectively.
22
bikamonki 1 day ago 2 replies      
By plain logic: if there isn't a formula for success, there isn't one for failure either.
23
flor1s 1 day ago 0 replies      
Reminds me of a website focused on failed Kickstarter projects: http://kickended.com
25
chiefalchemist 1 day ago 0 replies      
Startups, by definition, fail. The reasons are endless. Attempting to sum those up in what looks to be too few words is helpful how?

Entertaining. But is it useful?

26
beat 1 day ago 1 reply      
Kind of sad to see a product you liked on this list. RIP Grooveshark.
27
chrshawkes 14 hours ago 0 replies      
This wordpress website is slow as balls.
28
adamzerner 1 day ago 0 replies      
I know this is very tangential, but I hate the design decision to have the hamburger menu mixed in to the graveyard stuff at the top of the site. It reminds me of those hidden object puzzles. It very much prioritizes aesthetics over usability. With more thought, the aesthetics could be included without having to sacrifice usability.
29
JansjoFromIkea 1 day ago 0 replies      
Nearly every image on the site being a ~600x450px jpeg seems wasteful when almost all of these have easily accessible (or at least easy to recreate) SVGs.

Unless wordpress doesn't support SVGs?

30
robotnoises 1 day ago 2 replies      
Wow, rdio.com redirects to pandora...
31
bigbossman 1 day ago 0 replies      
perhaps the domain startupgraveyard.ai would better reflect these times
32
kevinmannix 1 day ago 0 replies      
How does one submit?
33
lightedman 23 hours ago 0 replies      
This will have been the millionth incarnation of this idea I've seen in 20 years, from books to videos to websites.

I'm guessing nobody's actually paying attention to history, otherwise the first book/video/website would've been all that was needed.

34
draw_down 1 day ago 0 replies      
Sometimes ideas don't work but then they do. Or vice-versa, sadly.
35
pokemongoaway 1 day ago 2 replies      
Startup Graveyard: A website for cataloging failed startups

Reason for failing:"Error 508. Resource Limit Is Reached"

30
Noun Project - Icons for Everything thenounproject.com
393 points by vikingcaffiene  13 hours ago   120 comments top 32
1
felixthehat 10 hours ago 3 replies      
I put up a couple of hundred icons that I'd drawn previously with a public domain license. Here are the download counts and royalties from the last 10 months for reference: http://i.imgur.com/clNZWUk.png
2
everyone 11 hours ago 11 replies      
Little pictograms for buttons are often extremely vague and open to interpretation.

E.g. here's a random game where, upon opening it, I was immediately confused: http://mobile.cdn.softpedia.com/apk/images/color-switch_1.jp... (also, the icons are all animated/rotating, making it even more confusing, which made this game menu stick out in my memory as an egregious example)

Some designers seem to love the minimalism of it though, for aesthetic reasons, I assume.

Imo text is the clearest thing you can put on a button to tell the user what that button does: 'save', 'load', 'new game', etc. Yes, it will need to be translated, but that's simply putting in more work for a better product.

3
Animats 30 minutes ago 0 replies      
Don't submit them to the Unicode Consortium. Please. Do not want.

There are now over 2600 emoji. Enough already.

4
jtraffic 10 hours ago 1 reply      
They also have a cool API: http://api.thenounproject.com/, which is used by automated logo services logojoy[0] and Tailor Brands[1]. (A minimal sketch of calling it is below the links.)

I think Google's autodraw[2] + noun project would make an excellent pairing.

[0] https://www.logojoy.com/
[1] https://www.tailorbrands.com/
[2] https://aiexperiments.withgoogle.com/autodraw
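Here's that sketch, assuming the OAuth 1.0a auth and the /icons/{term} search endpoint from their docs (the key/secret are placeholders and the response field names are from memory, so double-check the current docs):

  # pip install requests requests_oauthlib
  import requests
  from requests_oauthlib import OAuth1

  # Placeholder credentials; get real ones from the developer site.
  auth = OAuth1("YOUR_API_KEY", "YOUR_API_SECRET")

  def search_icons(term, limit=5):
      # Search public icons by term; returns the raw icon records.
      resp = requests.get(
          "http://api.thenounproject.com/icons/" + term,
          params={"limit": limit},
          auth=auth,
      )
      resp.raise_for_status()
      return resp.json().get("icons", [])

  for icon in search_icons("filter"):
      print(icon.get("term"), icon.get("preview_url"))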

5
wtvanhest 11 hours ago 0 replies      
This has been around for years. I was thankful for them when I was building my now failed company. I didn't end up licensing anything, but had I got to revenue, I certainly would have. Thank you to your team.
7
em3rgent0rdr 41 minutes ago 0 replies      
Icons are the written language of the digital world.
8
Jdam 9 hours ago 0 replies      
Nice service, but HUGE WARNING: do not sign up for emails, as they are super-spammy.
9
xs 10 hours ago 6 replies      
I think $40/yr or $2/icon is too far out of reach for me to use these icons as a proof of concept, blog, or early website launches. I have a website idea which would use about 100 of these icons, but I can't justify the cost yet since it brings in no revenue.
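(For scale: at $2 apiece, those 100 icons would be $200 up front, so the $40/yr plan is already the cheaper route at that volume; it's just still $40/yr for a project with no revenue.)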
10
Phlow 10 hours ago 1 reply      
What I really need is a quality Verb Project
11
WalterBright 10 hours ago 1 reply      
There are about a million words in the English language. The site has a long way to go to get to "everything". Me, I'll stick with phonetic alphabets, which have long displaced icons.

Interestingly, iconic languages usually wound up assigning sounds to the icons and were transformed into phonetic alphabets. Icons just aren't practical.

12
freekh 8 hours ago 1 reply      
This really reminds me of the glory days of clip art :) I for one really liked those, so no harm meant :) The next thing is Clippy coming back (oh wait, that's all those chat bots) :)
13
apo 10 hours ago 1 reply      
Valuable resource with many uses beyond putting icons on buttons. For example:

- source graphics for figures in books and articles

- inspiration for new ways of expressing an idea graphically

- gauging variation in visual representation of a concept

- learning how to make a particular shape with vector graphics

- logo ideas

14
tlogan 5 hours ago 1 reply      
I would be using this service if they offered all possible resolutions with one click. I assume this is a service meant to be used by non-designers to quickly get icons for some proof of concept or similar.

Am I missing something?

15
STRiDEX 9 hours ago 1 reply      
I read some designers discussing the filter icon usually being a funnel which doesn't do much filtering. It does funnel something into something. https://thenounproject.com/search/?q=filter interesting to see the other options that show up.
16
Steeeve 4 hours ago 1 reply      
How does this work when you purchase a royalty free license and then use it in an open source project?
17
qznc 7 hours ago 1 reply      
I always look for the icons with multiple gears. Have not found one which would actually work: Gears not touching, size difference blocks, or three gears in deadlock. :)
18
rch 11 hours ago 0 replies      
I can certainly see how this is useful, but is there also a complementary site that provides the opposite? I try to avoid using icons that resemble everyday items (the 3.5 inch floppy for 'save' is a good example).

Edit: there's actually a ton of good abstract stuff in here. Should have looked more closely.

19
dvcc 11 hours ago 1 reply      
Cool, but when infinite scroll is enabled I can't reach the footer to check the pricing when viewing an icon.
20
sosodaft 11 hours ago 1 reply      
TOS acceptance box and "you can send me emails" box combined. Classy.

All I wanted was a bacon egg and cheese icon...

21
blauditore 6 hours ago 1 reply      
Searched for "anything" - got asked:

 Did you mean anyhting?
I wonder what's going on there...

22
deevolution 10 hours ago 0 replies      
It would be nice if they implemented machine learning to cluster icons with similar styles together. Then your queries could return only icons in the style you've selected!
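Something like this could be a starting point. A toy sketch in Python: embed each icon image as a downscaled grayscale pixel vector and k-means cluster them (a real system would use a CNN embedding instead of raw pixels, but the plumbing is the same):

  # pip install scikit-learn pillow numpy
  import numpy as np
  from PIL import Image
  from sklearn.cluster import KMeans

  def embed(path, size=(32, 32)):
      # Crude "style" embedding: downscaled grayscale pixels.
      img = Image.open(path).convert("L").resize(size)
      return np.asarray(img, dtype=np.float32).ravel() / 255.0

  def cluster_icons(paths, n_styles=10):
      X = np.stack([embed(p) for p in paths])
      km = KMeans(n_clusters=n_styles, n_init=10).fit(X)
      return dict(zip(paths, km.labels_))  # path -> style cluster id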
23
tmaly 8 hours ago 0 replies      
This is awesome, I was actually just looking for a set of food icons I could purchase for my side project.
24
tinhangliu 10 hours ago 0 replies      
I love the Noun Project! It would be amazing to have a plugin to get all their icons in WordPress.
25
libeclipse 11 hours ago 1 reply      
This site is sick.

I mainly use it for finding nice logos for my github projects, and they really do not disappoint.

26
tomelders 10 hours ago 1 reply      
No icons for 'schadenfreude'? Pffft, stop wasting my time.
27
swayvil 4 hours ago 1 reply      
It was inevitable. We are going to hieroglyphics.
28
gfody 10 hours ago 4 replies      
searched for "save" .. ten pages of variations on the 3.5" diskette icon all labeled "floppy" even though diskettes aren't floppy. I wonder when a new save icon will emerge?
29
bbcbasic 6 hours ago 1 reply      
They even have icons for "Fuck", e.g.

https://thenounproject.com/search/?q=Fuck&i=54850

Iconic Kama Sutra here:

https://thenounproject.com/term/sex/

30
artur_makly 10 hours ago 0 replies      
use them all the time. been around a while though!
31
backpropaganda 10 hours ago 2 replies      
Let's go back to 1000 BC, and re-invent Chinese.
32
WalterBright 10 hours ago 0 replies      
Someday, someone will invent this thing called "words" that can be looked up in a dictionary when you don't know what they mean.