hacker news with inline top comments    15 Sep 2017
1
Firefox Multi-Account Containers blog.mozilla.org
309 points by nachtigall  3 hours ago   87 comments top 28
1
ff_ 1 hour ago 5 replies      
I LOVE this feature, but it has only one problem: when I'm in a container and I press Ctrl+T (new tab), the new tab opens in the default container. This doesn't make sense; I want it to stay in the same container.

This was also discussed in the issue tracker, in a now closed issue, in which the intuitive behaviour (staying in the same container) was proposed, but the discussion got sidetracked and in the end something totally different was implemented.

So if anyone from Firefox is listening here: please PLEASE consider implementing Ctrl+T in the same container :)

2
nicoburns 3 hours ago 4 replies      
This is incredibly useful. It's basically like Chrome's 'profiles', except per-tab rather than per-window. So I can now have my personal gmail, my work gmail, and the 3rd gmail account for a client all set next to each other, and colour coded.

This, along with the speed improvements (to both the UI and content processes) in Firefox 55, has made it my default browser for the first time since Chrome was released.

3
notheguyouthink 2 hours ago 2 replies      
Cool!

As an aside, I've been migrating away from Chrome for a while - I posted here a while back, dismayed by how terrible and slow Firefox was. Many people suggested I switch to Nightly.

Nightly is... a night and day experience. I've fully switched from Chrome now, thanks to Firefox. Note that on OSX I've had no complaints with Safari as my Chrome replacement, so I've stuck with it - but on Windows it's all Firefox.

Keep up the great work guys, the new stuff is amazing. Hope you can push it to the stable branch soon. :)

4
wslh 2 hours ago 1 reply      
For historical reasons I cannot avoid mentioning that I achieved this with my Cookiepie extension 11 years ago (a billion Internet years ago indeed): https://youtu.be/2Pfg-kJ4nAw Cookiepie was written only in JavaScript and was very hackish, because Firefox APIs didn't have a way to correlate network requests with the tab in the UI, so I traversed network and UI objects recursively to find unknown relationships between them. It was very difficult to support because even minor Firefox releases broke it.

I even entered my Cookiepie extension in the first Firefox extension contest [1] and there was no prize or mention for it.

[1] https://blog.mozilla.org/press/2006/03/mozilla-announces-win...

5
raimue 2 hours ago 1 reply      
I am a long-time user of the deprecated Multifox extension and I have been using Firefox containers ever since they were introduced to the stable releases about a year ago. This feature is actually built into Firefox; you only need to change some config settings to enable the UI (that is probably also all the linked extension does?).

As Multifox was one of the old XUL/XPCOM extensions, I am glad that this functionality was integrated natively before Firefox 57 disables all extensions that are not WebExtensions.

It is a great way to log in to multiple accounts on various sites such as Twitter, without going through the hassle of a full logout/login cycle. You can use the accounts side by side in different tabs, which will be color coded to indicate which container they belong to.

More details can be found on the Mozilla wiki: https://wiki.mozilla.org/Security/Contextual_Identity_Projec...

6
mey 1 hour ago 1 reply      
I use Chrome profiles heavily, so I am very happy Firefox is exploring this feature. When doing consulting, I like to keep different client activities isolated in their own profile, so I have fewer things to juggle if they use the same cloud service (AWS, G Suite, Jira, etc).

One limitation I currently see with that workflow (which works better for me in Chrome) is that this all appears to reside under a single Firefox Account, which essentially creates a master set of data to sync. I would like to be able to set up containers pegged to different Firefox Accounts (or to none at all).

7
ihateneckbeards 2 hours ago 3 replies      
That's great. I hate when Youtube recommends me all kinds of videos about turtles just because I stumbled over a turtle video 6 months ago.
8
kodablah 1 hour ago 2 replies      
I am building my own Chromium-based browser with a similar concept, called "bubbles" [0][1]. I'm not doing a Show HN on it until next week because I need to build another release, but feel free to try it out (I recommend building from source over using the one in the releases area, as a lot of bugs have been fixed in master).

Oh and for the commenter wanting Ctrl+T in the same container, a Ctrl+Shift+T in Doogie does open a child page in the same bubble.

0 - https://github.com/cretz/doogie

1 - https://cretz.github.io/doogie/guide/bubble

9
jqs79 31 minutes ago 1 reply      
How does this compare with using multiple profiles and the -no-remote flag? Does this manage only cookies, or does it also separate local storage (HTML5 session/local/global/web sql database), webcache, window.name caching (if the same tab can use multiple profiles), web history, flash cookies (for those who still have Flash installed), etc.?

People might get a false sense of security if all of these methods of saving data in the browser are not also separated along with cookies.

10
lucaspottersky 5 minutes ago 0 replies      
this is _AWESOME_!!!

maybe the best feature since HTML5 went mainstream!

i'm so tired of using an Incognito Window for that!

11
gangstead 33 minutes ago 0 replies      
I've been wanting something like this for Android / iOS.

I've had the problem that many restaurant rewards programs have gone from "10 punches on this card and your next sandwich is free" to "type in your phone number / scan this card" on each visit, and have now become "install our app" to get that free sandwich. That's more than I'm willing to give up for a cheap meal once every few months.

12
emerongi 1 hour ago 0 replies      
This has been extremely useful for the past month or so that I've been using it. I separate work accounts and personal accounts and that has tremendously simplified using the browser. For Youtube, I can use a different Google account without logging out of my main one. I call it my "Entertainment" container - maybe it will also make it harder for agencies to connect my leisure activities to other activities.

I even have a "Testing" container for when I'm testing a webapp and need to log in with 2 different users in the same window. Very convenient.

13
josefresco 1 hour ago 0 replies      
I implement "containers" simply by using different browsers (one for each screen). Chrome runs my (Google) email, calendar, drive. And then I use Firefox for my client work, where I log in/out of various client identities. I have Firefox set to "nuke" all session data on close - an absolute must-have feature for testing caching issues and making sure I don't end up with "hidden" active sessions around the web.
14
DoubleMalt 31 minutes ago 0 replies      
That might make me bring back part of my browsing to Firefox. The identities functionality was what made me use Chrome almost exclusively for the last two years.
15
ComodoHacker 1 hour ago 1 reply      
I doubt this will help much with privacy. People's laziness, plus the cognitive effort needed to track which container you are in, plus various tricks from advertisers and publishers, will keep the vast majority of users perfectly trackable.

Chrome's approach at least helps to keep multiple profiles visually separate.

16
nothrabannosir 2 hours ago 8 replies      
I've been using this for over a month now, and while I'm convinced it's the right idea, the implementation leaves much to be desired. Currently, it costs more effort than it's worth.

[EDIT: comments show this does exist! great] Missing: an easy way to open a new tab in a specific profile. ctrl-T always opens in the Default profile, not the one you're on. So you have to go File menu -> New tab -> select profile. And that menu changes items around slightly, so no muscle memory. I end up going to a tab already open, middle-clicking a random link, ctrl-L, and using that as a fresh tab. I see on their little drawings they show some cool drop down under the + button at the right of the tab row, but I can't find any such functionality.

[EDIT: Comments show this exists. Good enough!] Missing: a way to fix certain hosts to certain profiles. E.g. {XXX.myclient.com -> always open in "Client X" tab}. E.g. with links from GitHub (which is client-independent) into custom CIs (Jenkins etc). You forget, "why isn't this logged in? oh, profiles", go back, right-click the link, open in new container -> select container. Ugh.

Missing: a way to disallow any non-whitelisted hosts from a tab. E.g. having a gmail tab is useless, because every link you click will open in that profile (and you won't notice because hey, it works) and now your gmail credentials and cookies are available there. Again defeats the purpose. Especially for a "Banking" tab, for example.

Missing: clear warning that this doesn't do anything meaningful against tracking. It's a complete waste of time to separate your Facebook into a separate profile if you don't want to be tracked across other domains. Fingerprinting goes well beyond cookies. They don't need your account cookie to link your visits.

Missing: segmentation of plugins!! Different NoScript or block settings per profile? yes please! Or even just native Firefox settings (3rd party cookies, clearing policy, etc) per website per profile would be lovely.

All in all: I'm stubborn so I'll keep using it, but I'll be honest: there's quite a low ROI on them, as they are. Good start, hope they improve.

EDIT: Another missing feature: clearing cookies only from a certain profile. E.g. I discover I've accidentally been browsing youtube in my work profile (or whatever), and I want to delete all youtube cookies _but only from that profile_. Can't do it. I encounter this problem often with GMail, where I want to clear a friend's login but not log out all my sessions from different containers.

(PS: Sorry for using "profile" and "container" interchangeably - it was a bit stream of consciousness. I mean "container" for both words.)

17
wutwutwutwut 2 hours ago 1 reply      
I just want to be able to open a new tab in a brand new container. Similar to File->New Session in Internet Explorer.

If I want to test my web app with, say, 4 different identities, then figuring out which container is "free" becomes cumbersome.

18
newscracker 1 hour ago 0 replies      
I've used this from the days of it [1] being in Test Pilot (an add-on for experimental features) [2] and really loved the idea. Usually I'd use a couple of different browsers or shuttle between normal and private browsing/incognito modes for using multiple logins on services (from a privacy standpoint, I don't like linking accounts together on any service, as, for example, Google allows users to do).

I did provide feedback to the developers on the following:

1. Opening new tabs should have better intelligence about which container a user wants to go with.

2. Improving the look of the tab bar for better tab visibility and clarity about which tab is the current one (the latter point always grates on me when I use Chrome in incognito mode).

3. Detailed and clear documentation on how containers work across normal windows and private windows, because I certainly wouldn't want to use something believing that it's providing me isolation when it does not in certain scenarios. In my limited knowledge, the behavior of different browsers in keeping cookies/storage isolated in private/inprivate/incognito mode varies when it comes to multiple windows, multiple tabs and closing windows/tabs. That is already unclear enough (to me) that I don't open more than one private/inprivate/incognito window at the same time.

I would love for this to get into Firefox main instead of being an extension!

[1]: https://testpilot.firefox.com/experiments/containers

[2]: https://testpilot.firefox.com/

19
ams6110 45 minutes ago 1 reply      
Firefox has supported multiple profiles for a long time. How is this better?
20
doe88 1 hour ago 0 replies      
Just a word of caution: anecdotally, I installed the Container extension/feature 2 weeks ago when this was discussed on HN. I opened some tabs in different contexts, copied important links I wanted to keep, then decided to hide them. Finally, yesterday I wanted to read one of these links, so I went to look in the menu... Poof, gone, all my links gone... Needless to say I was happy... Therefore I have uninstalled not only this feature but Test Pilot altogether. I decided from now on to keep things simple, because it seems that is the only thing that really works. Maybe I'm rambling a bit, but the sad truth is I don't have much trust in Firefox anymore; I use it because it is, to me, the least worst browser, not because I really enjoy using it.
21
amelius 1 hour ago 0 replies      
Meanwhile, the websites I visit are tracking me across containers :(
22
maxerickson 2 hours ago 0 replies      
Can any browser historian explain why the original models of cookie sharing weren't more like this?

I figure it comes down to some combination of lack of consideration and performance concerns, but that is just speculation.

I suppose restricted cookie sharing is also a lot more complicated for the user.

23
1024core 1 hour ago 0 replies      
> and online trackers can't easily connect the browsing.

For what definition of "easily" ?

24
wakkaflokka 3 hours ago 1 reply      
I'm on the nightly and it says the extension is not compatible with my version of Firefox. Is this only for <56?
25
anovikov 2 hours ago 0 replies      
Will be cool for many Upwork account brokers
26
teekert 2 hours ago 1 reply      
Huh? Did I just add this and get a "Firefox screenshots" icon with it?
27
nasredin 2 hours ago 1 reply      
Am I in a parallel universe where Firefox updates bring useful features, make the browser look less like Chrome, and do not break backwards compatibility with thousands of extensions?

Integrate Tor and you might suddenly stop being ShittierChrome and stop losing your market share.

28
akerro 2 hours ago 2 replies      
How did they know I have multiple personalities? Are they watching me?
2
Show HN: Nulis Open Source Tree Editor for Writers github.com
113 points by rayalez  4 hours ago   37 comments top 17
1
stevesimmons 39 minutes ago 0 replies      
It is a shame more people aren't aware of Leo (http://leoeditor.com/).

Leo is a hierarchical editor with a tree view of nodes, which can contain Markdown, ReST, or code, and it can create compound docs a la literate programming from nodes, etc.

The killer feature imo is that nodes can be cloned to multiple places in the tree. So you could have, for example, notes grouped by project in one master list, plus another sub-tree of clones from the master list for your own current projects.

Leo's web site may look clunky and old-fashioned. But don't let that put you off!

-- EDIT --

For what it's worth, I do oscillate between Leo and Emacs org-mode. I'd probably stay 100% with org-mode if there were a way to clone nodes. Does anyone know if this is possible?

2
vincnetas 1 hour ago 5 replies      
Fun fact: Nulis means zero (0) in Lithuanian. I wonder what other product names unintentionally hit some word in some exotic language. One off the top of my head is https://en.wikipedia.org/wiki/Mazda_Laputa (we know what "la puta" means in Spanish).
3
Tenobrus 34 minutes ago 0 replies      
Best and most general option that I know of is Org mode (Emacs). But regardless, I'm glad more people are picking up on the power of hierarchical text editing (and interfaces more friendly to non-programmers are a reasonably good thing). But if you really want to go all the way, and get arbitrary directed graphs of text content, check out org-brain.
4
Multicomp 3 hours ago 1 reply      
This visually reminds me of some of the Federated Wiki concepts Ward C has been working on for a while [1].

[1] - http://sandbox.fed.wiki.org/view/welcome-visitors

5
ckluis 3 hours ago 1 reply      
This would be more useful if, like the Gutenberg project (the WordPress editor redesign), each card/block could be "structured data" (image, text, table, etc) - it would be a pretty neat tool to use for a knowledge base.
6
discreteevent 1 hour ago 1 reply      
Like the look of this a lot. I'm a fan of mind maps for getting work done. I might suggest that unless you think a large portion of your audience are vim users, you use ijkl instead of hjkl. For a non-vim user ijkl is completely natural, but hjkl will be annoying.
7
alfonsodev 42 minutes ago 0 replies      
I like the idea; this could be useful for education. I see it as a mindmap for taking notes.
8
fifnir 43 minutes ago 0 replies      
Hey there's a little typo in "How it works":

"beraking down your large writing goals"

9
fimdomeio 2 hours ago 1 reply      
This is kind of the same idea behind Xanadu[1], right? I fail to see the benefits of this multidimensional navigation through text. Getting lost seems way too easy in this kind of interface.

[1] - http://xanadu.com/

10
Lonte 1 hour ago 1 reply      
Fun fact: nulis means writing in Indonesian.
11
pizza 50 minutes ago 0 replies      
fyi, i wrote something fairly intricate in tree format and the account creation process deleted it without confirmation after logging me in
12
aargh_aargh 51 minutes ago 0 replies      
Looks nice. Just take note of the AGPL license.
13
xori 3 hours ago 0 replies      
I think the original app that inspired it was gingko[1] but it's nice that we can see the source of this one[2]

[1] - https://gingkoapp.com

[2] - https://github.com/raymestalez/nulis

14
gcoda 3 hours ago 1 reply      
Thank you. I use gingko; it is perfect but not self hosted. And TiddlyWiki with graph map extensions has too much extra stuff I do not need.

Not sure how I feel about mongo as a backend, but if this app gets the same export features as gingko, I am sold.

15
nslindtner 3 hours ago 1 reply      
Love the idea. Reminds me of Workflowy (which focuses on task lists in trees).

Check it out: https://workflowy.com/

16
fsloth 3 hours ago 1 reply      
Would someone use this? I think a text editor is a pretty good tree editor as it is but then again, as a programmer I see everything as parse trees anyway :)
17
kaushalmodi 1 hour ago 0 replies      
As a plain-text editor fan, I use Org mode in Emacs for tree-based documentation and blogging.
3
With $600M 'Blank Check' IPO, VCs Experiment on Startup Listings bloomberg.com
20 points by schintan  1 hour ago   5 comments top 3
1
heisenbit 1 minute ago 0 replies      
Distributed ledgers may be a better way to manage ownership. But that innovation is best suited for standardized stakes subject to audits, regulation and oversight. Startups are where everything is non-standard and no track record has been established. Innovation in contract mechanics adds little value, and the way it is done at the moment takes away a lot of the already limited transparency and accountability.

Expect to get hurt once the bubble dynamic fades.

2
schintan 22 minutes ago 2 replies      
Matt Levine doesn't think this is a good idea: https://www.bloomberg.com/view/articles/2017-09-15/icos-vcs-...
3
mrleiter 27 minutes ago 0 replies      
I recently stumbled upon a "project" called Own Austria, which promises you ownership of a part of Austria's products. Essentially you buy partial ownership of a fund that only invests in Austrian companies. The selection criteria? a) They must be Austrian and b) "be important in regards to jobs and revenue". There is no clear investment strategy. I chuckled and moved on.

This somehow looks a bit the same, although the people behind this are quite promising.

4
Why is it faster to process a sorted array than an unsorted array? stackoverflow.com
225 points by tosh  3 hours ago   73 comments top 17
1
taeric 1 hour ago 3 replies      
By far the most interesting part of this post is the update with newer compilers. Intel's compiler, in particular, makes an awesome optimization.

Edit: The update is not new, either. Just the part that I found interesting. Apologies for any confusion.

2
QuotesDante 11 minutes ago 0 replies      
My immediate reaction would be to think of the entropy of the information in the array. Sorted sounds like energy was spent to put more information into that array. Intuitively, the lower entropy of a sorted array should help us predict and make better decisions along the way of searching for things in the array. Completely unsorted arrays give us less information to work with: the lack of order certainly can't help us make decisions!
3
cimi_ 42 minutes ago 0 replies      
I had a surprise with sorting in JavaScript a few days ago, details also on SO.[0]

TL;DR: Chrome uses quick sort and we managed to hit its worst case by pre-ordering the input alphabetically on the server side.

[0] https://stackoverflow.com/questions/46228556/how-is-array-so...

4
corey_moncure 2 hours ago 6 replies      
The question I have, which I do not see answered after a quick glance of the top responses, is: What is the total time of sorting + good branch prediction loop, versus not sorting and bad branch prediction loop? Does the good branch prediction save more time than it cost to sort the array, on modern processors? What about old processors with shorter pipelines?
5
bluejekyll 2 hours ago 0 replies      
The update at the very end is pretty awesome. Basically saying, some compilers will optimize this for you now so there's no difference.

It would be interesting to see how Clang/LLVM do...

6
gene91 1 hour ago 1 reply      
It appears to me that it would be trivial for a compiler to use a conditional instruction (instead of a branch) here. As a result, I'm very surprised that it didn't. Any idea why this is the case?
7
xigency 15 minutes ago 0 replies      
I'm currently taking an OMS CS course in High Performance Computer Architecture and recently completed the branch prediction lesson. It is a free course on Udacity. Here's the source: https://www.udacity.com/course/high-performance-computer-arc...

Lesson 3 covers pipelines and lesson 4 covers branches. Milos does a great job of explaining "How It Works" for something that is really a hidden layer under the CPU.

8
jnordwick 43 minutes ago 0 replies      
Needs a "(2012)" at the end of the title. Also I'm not sure when cmov changed to 1-cycle latency, but it might affect some of the tangential topics here.
9
localhost 44 minutes ago 1 reply      
Read a little further down to WiSaGaN's answer. There's an excellent discussion of the optimizations afforded by the cmovge instruction that is typically generated for the C ternary operator and how its implementation in the CPU pipeline allows it to avoid the branch misprediction penalty.

He references a textbook as well "Computer Systems: A Programmer's Perspective (2nd Edition)". Just bought a copy.

10
ramshorns 1 hour ago 1 reply      
Some of the suggested ways to avoid the branch are bit manipulation (which is not necessarily portable) and the ternary operator (which seems hardly different from an if-else, though maybe the compiler usually treats it differently). It seems like another way would be

    sum += data[c] * (data[c] >= 128);

which adds 0 when the condition is false.
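
The same trick carries over to Python, where True and False multiply as 1 and 0. A minimal sketch (illustration only - in CPython, interpreter overhead mostly swamps any branch-prediction effect):

    import random

    data = [random.randrange(256) for _ in range(1000)]

    # Branchy version: a data-dependent if decides whether to add.
    total_branchy = 0
    for x in data:
        if x >= 128:
            total_branchy += x

    # Branchless version: the comparison result multiplies as 0 or 1.
    total_branchless = sum(x * (x >= 128) for x in data)

    assert total_branchy == total_branchless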
11
samfisher83 2 hours ago 1 reply      
After it's sorted, you could add a break condition for when the value is less than 128; that way it would be faster.
12
wiz21c 2 hours ago 1 reply      
When I look at the code, I'm under the impression that it will always generate the same output, because the random seed is fixed (besides the timer). Given current tech, could a compiler see that and just reduce the computation to a simple value?
13
wiredfool 2 hours ago 2 replies      
(2012)
14
minikites 2 hours ago 0 replies      
This is a fabulous explanation that I think a novice programmer could readily grasp.
15
tapatio 2 hours ago 6 replies      
Imagine that as an interview question!
16
LucMomal 1 hour ago 1 reply      
Imagine if the words in your dictionary weren't sorted, and you will have your answer.
17
typon 1 hour ago 0 replies      
Excellent answers
5
There Is No Reason for Any Individual to Have a Computer in Their Home quoteinvestigator.com
34 points by sohkamyung  2 hours ago   28 comments top 5
1
gregmac 44 minutes ago 4 replies      
> To understand the mindset of this period it is important to recognize the distinction between a computer terminal and a free-standing computer. Some experts believed that individuals would have terminals at home that communicated with powerful remote computers providing utility-like services for information and interaction. These experts believed that an isolated computer at home would be too under-powered to be worthwhile.

Considering how the majority of the general population's use of "computers" is really interacting with remote and cloud services (Facebook, Netflix, etc) this is a valid argument today. While today's equivalent of terminals (phones, tablets, and even PCs) have a significant amount of local computing ability, for the most part they really are just used as fancy terminals. Gaming is maybe the one mainstream application where local compute is important.

The other quote that stuck out to me was this:

> The personal computer will fall flat on its face in business because users want to share files and want more than one user on the system,

2
gnicholas 45 minutes ago 3 replies      
Funny how we're now at a time where we're moving past computers. I thought this was going to be about how you don't need a computer at home anymore because you can do so much on tablets, phones, watches, and TVs.

When the Apple Watch with LTE was announced, I wondered when people would start to have a watch but no phone. When the iPad got LTE, senior execs shifted to them and raved about the experience. It worked for them because they mostly just did email and a few other things, unlike worker bees who need a traditional computer OS. I wonder if a similar thing will happen with higher-ups foregoing phones for watches.

My guess is that in the next 3-4 years, we'll start seeing people talking about how amazing it is to go around without a phone. Perhaps a tablet for doing "real work" and a watch for everything else.

3
ender89 44 minutes ago 0 replies      
George W. Mitchell's quote was pretty much spot on considering the context of the time: he was basically describing an internet-connected PC communicating with servers somewhere that do the real computing. Now obviously even the cheapest netbook has more computing power than anything he was thinking of at the time, and your average gaming computer is vastly more capable than the home terminal he was envisioning, but the fact of the matter is that we're moving more and more towards the thin client model where the majority of your computing is done in the cloud. Hell, I'm sitting in front of a macbook pro that could tear your face off, but I rely on the web and a remote server to run my word processing software.
4
yial 49 minutes ago 0 replies      
I guess some of the ideas were half right about the future to come, considering how many of our devices now make use of cloud computing.
5
jandrese 43 minutes ago 0 replies      
The computers they were talking about would be analogous to the big iron supercomputers of today. The quote "There is no reason for any individual to have a supercomputer in their home." is much closer to reasonable.

It does show a lack of foresight on their part, but it's not as completely wrong as it looks on the surface.

6
Mystery of sonic weapon attacks in Cuba deepens theguardian.com
365 points by nikcub  10 hours ago   171 comments top 29
2
Osmium 9 hours ago 6 replies      
> And no single, sonic gadget seems to explain such an odd, inconsistent array of physical responses.

Is it possible the sounds heard were illusory, as a result of whatever caused the brain damage? So not a sonic weapon, but some other mechanism of action?

> The blaring, grinding noise jolted the American diplomat from his bed in a Havana hotel. He moved just a few feet, and there was silence.

So something that can be focused at a specific point. If not a sonic weapon, then it has to be electromagnetic?

Talk about bizarre. The whole thing reads like a conspiracy theory. Presuming the reporting is accurate, it's hard not to believe people were specifically targeted with the intent to harm them (rather than the alternative explanation of an espionage technology gone wrong), if the effects were localized on their beds (i.e. a specific physical location where they would be known to be for several hours).

3
CapsAdmin 4 hours ago 5 replies      
Some of the sound descriptions in this article sound very similar to an ongoing experience I had, although without the health problems (I hope?).

It turned out to just be a bad phone charger. I would charge my phone at night (a Samsung S4 at the time), and when fully charged and idle it would make a very faint, high-pitched sound, like "morse code" or digital noise (I believe the sound characteristic is the CPU doing stuff and drawing power?).

The sound was only audible in some places of my apartment, and it would depend on how my head was turned. I remember hearing this sound even in my dreams.

I've heard this sound outside as well, depending on how I turn my head. But lately I haven't heard or maybe noticed anything. I'm also getting older though, so maybe I've lost the ability to hear high-pitched sounds like that.

Googling "keep hearing morse code" and similar terms reveals that many people have had similar experiences, but with many wild conclusions.

There might be something shady going on in Cuba, but it wouldn't surprise me if bad adapters and sockets got lumped together with it.

4
qubex 5 hours ago 3 replies      
I have, on occasion, inadvertently taken combinations of medication that caused extremely disorientating and apparently very loud "buzzing" in my ears as soon as I woke up, combined with a strange sense of immobility.

Of course that entailed no brain damage or loss of hearing, but it was a very compelling sensation, and it caused massive panic (the first time at least, the second time... somewhat less so, but I do get the feeling that I had an emotional response that was in some sense 'synthetic' and due to the pharmacological reaction).

Hence, a subset of these symptoms might be pharmacological, but not all of them. That leaves open the question of motive.

(Yes, I understand that a partial explanation of symptoms and no explanation of motive makes for a totally bankrupt theory, but hey, I just thought I'd throw in my anecdote - and I'd really rather not explain the particular combination of medications.)

5
subroutine 5 hours ago 4 replies      
I was skeptical whether sound could be focused into a beam, but apparently inventor Woody Norris has developed a technology called Hyper Sonic Sound that accomplishes this feat. As one experiencer puts it: "In the sound beam's direct line, you hear the audio signal as if through headphones, regardless of background noise. Outside the beam, you hear nothing." HSS works by generating two types of ultrasonic waves, both inaudible to the human ear. Once those waves reach an object (like your head), they crash together and re-create the original sound. Ultrasonic waves also conserve sound for 150 yards without distortion or volume loss.
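
Loosely, this is the parametric-array effect: air responds nonlinearly to intense ultrasound, so two inaudible tones mix, and the product term contains their sum and difference frequencies:

    \cos(2\pi f_1 t)\,\cos(2\pi f_2 t) = \tfrac{1}{2}\big[\cos(2\pi(f_1 - f_2)t) + \cos(2\pi(f_1 + f_2)t)\big]

With, say, f_1 = 200 kHz and f_2 = 195 kHz (made-up example values), only the 5 kHz difference component is audible; everything else stays ultrasonic, which is why the sound seems to exist only along the beam.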

Here is his TED talk demo...

https://www.ted.com/talks/woody_norris_invents_amazing_thing...

edit: so I've been reading about this Woody Norris guy; I don't think anyone could possibly be more conceited. From his own website (http://www.woodynorris.com/WhoIsWoodyNorris.htm):

Who is Woody Norris? Quite simply, Woody Norris is a visionary, a futurist. He looks into the future, gathers insights into what will make life better, and applies them to the world of today. He sees things that the rest of us do not. And, as the future arrives, it finds his inventions and products already in place.

6
salimmadjd 6 hours ago 1 reply      
I'm puzzled by the motivations behind these attacks. It makes no sense. Who is doing it and why?

Is it Cuba's version of a deep state? Do they feel threatened that they would lose control because there is no longer a western enemy? Is it people within our own government (our deep state) who feel we need to have this enemy, or who are somehow benefiting from it? Is it the Russians, who want to create this division to have the option of a place close to the US? Is it factions within our own government wanting to blame this on Russia and take us back to the Cold War, finding an excuse for expansion of intrusion, etc.?

7
JKCalhoun 1 hour ago 1 reply      
I wonder at this point if the data isn't being polluted by psychosomatic "symptoms" from diplomats who are now aware of the mysterious events in Cuba.
8
rgrieselhuber 4 hours ago 2 replies      
This is something that would have sounded like a conspiracy theory if not reported in a mainstream publication. Amazing what just a little corporate cachet can do for a story.
9
dharma1 8 hours ago 2 replies      
Santeria spells, or too many mojitos.

Joking aside, most likely it's overpowered microwave espionage equipment they got from the Russians a long time ago and that is still in use.

Google "US Embassy microwave moscow" to find out what happened back in the 70s.

10
bhouston 3 hours ago 1 reply      
Sounds like directional microwaves that heated up the brain.
11
projectant 7 hours ago 1 reply      
Could it be a chemical agent combined with an RF or microwave field that activates it?

Seems like pure microwave would cause a range of random hallucinations and sensations. And also accidentally target people who were not spies / employees.

I think a biological agent that's activated by EMF is more likely.

12
heisenbit 2 hours ago 0 replies      
Very localized sound - could that be some resonance effect related to the building? The building looks like it consists of a very regular structure.
13
m12k 4 hours ago 2 replies      
I'm reminded of the Sonic Tank from the old Dune 2 RTS. I wonder if sound is finally being weaponized or if it's actually microwaves as others have suggested? But just as curiously, what is the motivation for this, what could someone achieve by doing this? Is it a form of gaslighting? Or someone trying to sour the relationship between Cuba and US/Canada? (either dissidents or another state) In some ways this has a similar "shooting pigeons with cannons" high-tech overkill smell to it as the polonium poisoning of that Russian in the UK a decade ago.
15
diminish 9 hours ago 0 replies      
Beyond sonic, a conclusive list of options could also include attacks in gas form, liquid form, radioactive form, or solid form - it could be through the meals they ate, the booze they drank, a drug they took, a sea or other animal.

I'm curious if there are any links to independent explanations or investigations, maybe from local sources? [1] is from another thread.

[1] https://www.justsecurity.org/44289/sonic-attacks-diplomats-c...

16
0898 9 hours ago 3 replies      
Directional audio has been possible for about a decade now. Holosonics.com is one manufacturer but there are others.
17
tqkxzugoaupvwqr 6 hours ago 0 replies      
Could it be that the attacks are at night because that's when the diplomats' cell phones rest on their nightstands? Maybe their electronic devices are the target, and it just so happens that the diplomats are close by and become collateral damage.

Edit: I assume an electromagnetic attack. The sounds the victims hear could be caused by that, see Wikipedia links in this thread.

18
dsiegel2275 4 hours ago 0 replies      
My first thought was that it is JavaScript fatigue.
19
PerilousD 3 hours ago 0 replies      
There were some studies on converting audio so it could be sent, inaudibly, over pulsed microwaves. This would resolve the size and distance issues for the alleged device, and the fact that this happened at night and in bed could play into the original studies for using this tech.
20
henearkr 7 hours ago 1 reply      
Could it be a powerful sonic transmission of data? I mean, EM waves are heavily monitored, so if somebody had to send data secretly, an ultrasonic channel might be used. And there may have been a side effect due to too much power and the diplomats being in the middle of the beam.
21
johnhenry 2 hours ago 1 reply      
> ...baffling US officials who say the facts and the physics don't add up

This is a long shot, but Cuba is pretty close to the Bermuda Triangle -- a place with a history of strange incidents supposedly involving physical anomalies. https://en.wikipedia.org/wiki/List_of_Bermuda_Triangle_incid...

22
basicplus2 9 hours ago 4 replies      
Sounds most likely like a microwave attack; you don't damage the brain that way with audio.
23
throwawaylalala 1 hour ago 0 replies      
Why has LRAD never come up?
24
farseer 6 hours ago 0 replies      
I wonder how one would go about detecting that such a sonic attack is in progress. What would the equipment to detect such an attack even look like?
25
jokoon 5 hours ago 1 reply      
This kind of news makes me think that wars are no longer as deadly as the world wars were. Weapons are becoming so accurate, I don't think world war 3 (if it happens) will result in as many casualties.

So in a way, technological weaponry will save civilians. Militarily and strategically, there will be no point in using nuclear weapons against civilians either, like it was done in Japan.

This kind of news reassures me that conflict is getting "cleaner".

26
soufron 6 hours ago 2 replies      
Have they thought about Cuban rum and spirits? It looks like a far more plausible explanation for these night attacks.
28
tryingagainbro 4 hours ago 0 replies      
>>The State Department detected high levels of radiation in the embassy staff, and provided hazard pay to personnel who worked in Moscow.

While this is harsh, these guys are soldiers, working night and day to essentially destroy that country. (we can argue till the cows come home about what system is the "right one") Live by the sword, die by the sword.

29
scrrr 6 hours ago 0 replies      
My first thought: Look, this is Cuba. Home of con artistry. They will stage a UFO landing to get more tourists to spend convertible pesos. For me P(heard-in-havana-and-true) is around 0.1, max.
7
Long-range communication with devices that consume almost no power washington.edu
35 points by sverige  2 hours ago   6 comments top 5
1
andrewflnr 23 minutes ago 0 replies      
We need to talk about the security implications of low-power devices broadcasting data. I cringed when the article talked about broadcasting "medically relevant data" for kilometers, in the clear. That's not OK. Everyone needs to be clear on the fact that that's not OK.

I'm ecstatic at the possibilities this has for reducing the power cost of communication[0]. Please use the savings on some (at least half-decent) crypto.

[0] Seriously, can this work for a consumer handset communication network?

2
cstross 1 hour ago 0 replies      
Wow, the old-time KGB would have loved this:

https://en.wikipedia.org/wiki/The_Thing_(listening_device)

(A passive resonant bug presented in a gift to the US Embassy in Moscow in 1945, designed by Léon Theremin, ancestor of today's RFID devices.)

The modern bugging applications don't even bear thinking about ...

3
neoh 25 minutes ago 1 reply      
Some Stanford students seem to have done something quite similar already: https://web.stanford.edu/~skatti/pubs/sigcomm15-backfi.pdf
4
eutropia 17 minutes ago 0 replies      
I'm concerned with the environmental impact of:

"...farmers looking to measure soil temperature or moisture could affordably blanket an entire field to determine how to efficiently plant seeds or water. "

Littering chips all over the place?

5
ryanmarsh 38 minutes ago 0 replies      
I didn't see MHz one time in the article, and frequency only once, with no numbers.

2.8km... is that line of sight?

This article is beneath HN.

Link to the paper: http://longrange.cs.washington.edu/files/loRaBackscatter.pdf

9
Cassini's Saturn Mission Goes Out in a Blaze of Glory npr.org
133 points by okket  3 hours ago   47 comments top 5
1
herodotus 1 hour ago 2 replies      
Pictures from these kind of NASA events always show rows and rows of people sitting at computer screens. I have always wondered why there are so many of them, and what they are doing. Obviously there is a lot of work to be done to process the data that is received, but why so many real-time people? Is it just organizational bloat?
2
dmix 27 minutes ago 2 replies      
> "We don't have a gas gauge. It would be really nice if we did," Molly Bittner, a systems engineer at JPL who has worked on Cassini for the past four years, tells NPR. Instead, mission controllers had to estimate the amount of fuel used by each maneuver. And there had been lots of maneuvers since 2004.

Does anyone know why they couldn't have a way to measure the amount of fuel left? This seems like a relatively easy engineering problem from a non-expert perspective.
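
Gauging liquid propellant in microgravity is genuinely hard: the fluid floats around inside the tank, so there is no level to read. Missions typically fall back on bookkeeping - integrating commanded burn time against an assumed flow rate. A toy sketch of that accounting in Python (all numbers are made up):

    # Each maneuver: (burn duration in seconds, propellant flow in kg/s)
    burns = [(120.0, 0.03), (45.5, 0.03), (300.0, 0.03)]

    initial_propellant_kg = 100.0
    used = sum(duration * flow for duration, flow in burns)
    print("estimated remaining: %.1f kg" % (initial_propellant_kg - used))

The estimate drifts because the real flow rate varies with tank pressure and temperature, which is why the uncertainty grows over a 20-year mission.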

3
ccozan 1 hour ago 3 replies      
Linking my comment to a sister submission [1]:

Final Images ( choose Grand Finale )

https://saturn.jpl.nasa.gov/galleries/images/

I wonder if it got destroyed or just lost the signal due to the atmosphere. Amazing times to be alive.

[1] https://news.ycombinator.com/item?id=15256753

4
luckyt 3 hours ago 4 replies      
> Scientists [were] worried that when [Cassini] loses power, it could crash into a pristine moon, contaminating a place where we might someday search for life

Why is this so bad? Moons are pretty big compared to spacecraft, why are they so worried about the environmental damage of a single spacecraft weighing a few tons?

5
adventured 2 hours ago 1 reply      
"It's 13-year mission to explore the strange world of Saturn went on nearly a decade longer than planned."

Another testament to NASA's approach of relying on extremely well tested, older technology.

When I was younger I didn't entirely agree with that approach (imagining what could be done with the latest & greatest camera tech, etc.). Having lived long enough to see a few of these very extended duration missions has entirely put me in the corner of agreeing with their approach.

This is also why it's ok, more often than not, that NASA missions cost so much (a more frequent criticism these days). 20 years in space; 13 engaged in a highly functional, active mission; truly incredible.

10
Google Chrome to stop autoplaying content with sound venturebeat.com
56 points by piyushgupta27  1 hour ago   15 comments top 10
1
outsidetheparty 26 minutes ago 2 replies      
It's a start. Allowing users to disable autoplay altogether would be better, but shutting down the audio is better than nothing.

It's a shame that browsers increasingly have to actively defend against website behavior, instead of just being tools for displaying those websites; but, well, here we are.

2
MBCook 23 minutes ago 0 replies      
> The content is muted, or does not include any audio (video only)

Sigh. So no surprise audio but nothing to stop the trend of silent video ads or (even worse) muted live TV?

How about NO auto playing live video or files over 250k? Don't waste my bandwidth streaming stuff I'm not trying to watch.

3
danso 23 minutes ago 0 replies      
Autoplay seems to be something universally reviled and yet the trend seemed to be that enough people tolerated it to make advertisers and sites double down on it: https://www.theguardian.com/technology/2017/jul/19/facebook-...

> During Facebook's announcement of the feature in February, product manager Dana Sittler and engineering manager Alex Li said: "As people watch more video on phones, they've come to expect sound when the volume on their device is turned on."

> *"After testing sound on in News Feed and hearing positive feedback, were slowly bringing it to more people.

Likely Chrome's feature won't affect FB, given how much FB use is on mobile, but I'm interested in how media sites, such as CNN, will be impacted. It seems like they've become dependent on throwing up an annoying newscast video (with post-roll ads) for every article, many times unrelated to it.

4
amelius 8 minutes ago 0 replies      
As much as I like this idea, I'm sure that this will also cause problems in certain cases.

Therefore, I'd like to see per-website "capability settings", just like you can allow smartphone apps to access your microphone, camera, et cetera. And an API for websites to determine which capabilities they have, and an API to request that the browser ask the user to add a capability.

5
stablemap 13 minutes ago 0 replies      
6
Sir_Cmpwn 9 minutes ago 1 reply      
>Chrome will only autoplay a given piece of content when the media won't play sound or the user has indicated an interest in the media.

I wonder if a certain "Tube" site will have "indicated interest" on by default, hmm...

8
camus2 15 minutes ago 0 replies      
I'm not sure why all these websites decided that auto-playing video is a good thing. It's annoying as fuck. I click on an article for reading, I'm not clicking on a youtube playlist.

The latest craze among web designers is videos that follow the reader even when they are fucking scrolling down the page! Enough!

Can we have a designer here justify why they do this? If it's about engagement, I'm not going back to your website if you do that to me once.

9
RUG3Y 19 minutes ago 0 replies      
A step in the right direction.
10
artursapek 11 minutes ago 1 reply      
Doesn't YouTube technically autoplay videos? Is this going to disable that?
12
The design side of programming language design tomasp.net
59 points by panic  4 hours ago   9 comments top 4
1
hood_syntax 6 minutes ago 1 reply      
A key point (perhaps well known but still very important) is the "let mutable" vs "let" distinction for declaring variables. It really does change the tendencies of developers by putting the burden of effort on one of two paths, and encouraging people to write code a certain way can have significant effects on the end product.
2
everdev 1 hour ago 1 reply      
Really appreciate this article. Some languages feel like their syntax was designed by developers and others feel like it was designed by designers. It seems totally appropriate that the UI (language design) should be a different skill and created with a different mindset than the back end (language implementation). I hope that this focus can lead to more beautifully designed languages, not just faster languages.
3
azhenley 19 minutes ago 0 replies      
There has been some great work in this area (but not near enough!).

One of the more well known pieces that is worth a read is Cognitive Dimensions of Notations [1]. I've even used them in my research on the usability of debugging tools.

It is composed of 14 dimensions to evaluate your design (of a PL or UI). They are:

- Abstraction gradient

- Closeness of mapping

- Consistency

- Diffuseness and terseness

- Error-proneness

- Hard mental operations

- Hidden dependencies

- Juxtaposability

- Premature commitment

- Progressive evaluation

- Role-expressiveness

- Secondary notation and escape from formalism

- Viscosity

- Visibility

[1] https://en.wikipedia.org/wiki/Cognitive_dimensions_of_notati...

4
14113 33 minutes ago 0 replies      
I think this article misses a crucial point with its phrasing, or its construction of the "design vs mathematics" tradeoff. Mathematics is superior for underpinning programming languages because it is universal. Design, like art, relies on a shared view of the world to be appealing. Design from the 70's or 80's is often unappealing to modern viewers because of the lack of shared experience with the designer, for example.

Mathematics solves this by appealing to a shared underlying "truth" that allows not only programmers with different backgrounds, but also computers, to understand and process a programming language.

13
The IPv6 Adoption Curve infoblox.com
14 points by Sami_Lehtinen  1 hour ago   2 comments top 2
1
jzl 1 minute ago 0 replies      
Seems like one of the largest sources of IPv6 adoption is mobile phone networks. I'm on AT&T in the US and if I check "what is my ip" when on LTE it shows me an IPv6 address. Mobile now accounts for well over half of all web traffic, so if/when mobile networks are universally IPv6 then we'll basically be over the hump.
2
sebazzz 28 minutes ago 0 replies      
I would like to change to IPv6, and my current ISP can make it happen (Ziggo, The Netherlands). However, I currently have my own externally accessible IPv4 address, and I will lose that when I opt in to IPv6 because Ziggo only offers IPv6 with DS-lite. DS-lite means I cannot access my network over IPv4 anymore, so I will not change to IPv6.
14
Malicious software libraries found in PyPI posing as well known libraries gov.sk
358 points by nariinano  4 hours ago   150 comments top 40
1
hannob 2 hours ago 2 replies      
Ok, here's some ugly backstory on this: this problem has been known for a while, yet both the PyPI devs and the Python security team decided to ignore it.

Last year someone wrote his thesis describing Python typosquatting and standard library name squatting: http://incolumitas.com/2016/06/08/typosquatting-package-mana...

However after that the packages used in this thesis - the most successful one being urllib2 - weren't blocked, they were deleted. Benjamin Bach was able to register urllib2 afterwards. Benjamin and I decided that we'd now try to register as many stdlib names as possible.

See also: https://www.pytosquatting.org/

2
chatmasta 4 hours ago 5 replies      
Package managers seem to be an increasingly popular attack vector. It's only luck that none of the attacks have been particularly malicious yet. Considering how many package manager downloads go to a server in a datacenter, a widely distributed malicious package could control a botnet with extremely high throughput, or wreak havoc on any databases it comes into contact with.

It's only a matter of time before something like this happens. A big part of the problem is that application package managers, like pip or npm, are far less sophisticated than those of operating systems, like aptitude or yum. It needs to be easy for developers to open source their code, and to mark dependencies with precise commit hashes, but the download also needs to be secure and verifiable. There are many difficult tradeoffs to consider in terms of usability, centralization, security and trust.
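
Pip already supports one piece of this: a requirements file can pin each dependency to a digest with --hash=sha256:..., and pip install --require-hashes refuses anything that doesn't match. A minimal sketch of the underlying check in Python (the file name and digest in the usage comment are made-up examples):

    import hashlib

    def artifact_matches(path, expected_sha256):
        # Stream the file so large artifacts don't need to fit in memory.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest() == expected_sha256

    # Hypothetical usage against a pinned digest:
    # artifact_matches("requests-2.18.4-py2.py3-none-any.whl", "9c443e7324ba...")

This only protects against a swapped or tampered artifact; it does nothing if the pinned package was malicious to begin with.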

3
kasabali 3 hours ago 5 replies      
Yet another attack vector that doesn't exist at all in Linux distributions but invented by language package managers, sadly.

They solved the issue 2 decades ago by heavily vetting packages before accepting them into repositories. Users are allowed to add and use packages from 3rd party repositories.

Maybe solution to this is creating curated repositories based on publicly open ones and using them by default (and requiring opt-in for using other repositories). Conda for Python and Stackage for Haskell seems like relevant solutions.

4
raesene6 3 hours ago 1 reply      
This isn't, in any way, a new problem. I did a presentation on this topic for OWASP AppSecEU 2015 (https://www.youtube.com/watch?v=Wn190b4EJWk&list=PLpr-xdpM8w...) and when doing the research for that I encountered cases of repo attacks and compromise.

IME the problem will continue unless the customers (e.g. companies making use of the libraries hosted) are willing to pay more for a service with higher levels of assurance.

The budget required to implement additional security at scale is quite high, and probably not a good match with a free (at point of use) service.

5
thearn4 4 hours ago 2 replies      
It looks like the code phones home to a server in China:

IP: 121.42.217.44
Decimal: 2032851244
Hostname: 121.42.217.44
ASN: 37963
ISP: Hangzhou Alibaba Advertising Co.,Ltd.
Organization: Hangzhou Alibaba Advertising Co.,Ltd.
Services: None detected
Type: Broadband
Assignment: Static IP
Continent: Asia
Country: China
State/Region: Zhejiang
City: Hangzhou
Latitude: 30.2936 (30 17 36.96 N)
Longitude: 120.1614 (120 9 41.04 E)

6
IgorPartola 4 hours ago 2 replies      
This to me is the nightmare scenario. Well, one of two, the other being that the developer of an obscure library I use has their PyPI password compromised and a bad actor uploads a backdoored version of the library.

Fundamentally, the reason this is different from how things like Linux distros work is that Linux distros have maintainers who are in charge of making sure every new update to one of their packages is legit. I am sure you can try to sneak malicious code in, but it isn't going to be easy.

I am not advocating that PyPI (and npm) adopt the same model. That would be too restrictive. But maybe just showing the number of downloads isn't the best way to assess whether a package is legit. Perhaps some kind of built-in review system would be nice.

7
ris 9 minutes ago 0 replies      
Hooray for the "wild west" model of package repositories.

Come back maintainers & packagers, all is forgiven!

8
defined 1 hour ago 1 reply      
Here's something that contributes to typosquatting: the lack of responsiveness by package management organizations to claims on orphaned or unmaintainable packages.

People who upload packages often leave organizations, which are then stuck with a package they can't update because the password went with the person and the email reset link points to a now-defunct email address.

Petitioning the package management team is sometimes fruitless, forcing a needless new instance of typosquatting.

9
pishpash 28 minutes ago 0 replies      
Maybe packages should be signed by several trusted maintainers. Or, since PyPI packages sometimes list a source code link on GitHub, along those lines there could be a process to prove ownership of some known online identity, Keybase style. Unpopular packages can also be flagged, especially one that has a near twin that is much more popular. There are many solutions.
11
rantanplan 4 hours ago 5 replies      
The regex they have for identifying fake/harmful packages is wrong.

`pip list --format=legacy | egrep '^(acqusition|apidev-coop|bzip|crypt|django-server|pwd|setup-tools|telnet|urlib3|urllib) '`

This incorrectly lists `urllib3` or the `cryptography` package for example, which are perfectly valid packages.

[UPDATE]

Read "tobltobs" comment below. I incorrectly removed a trailing space from the regex.

12
mwexler 1 hour ago 0 replies      
Both Anaconda (for Python, https://docs.anaconda.com/anaconda/packages/pkg-docs) and Microsoft (for R, https://mran.microsoft.com/) have "reviewed and audited" collections of packages for their languages. That's part of what you pay for when you buy support for the open source tools.
13
Sir_Cmpwn 3 hours ago 2 replies      
I think a more Linux-like approach to package repos is better: a curated package repository run by volunteers in maintainership roles. Then you have a human being verifying the upstream and keeping malware out, and you get more consistency across packages as a bonus. If you want your package added, it's as simple as sending an email, and it provides a new avenue for people to contribute to the success of the ecosystem as package maintainers.

When you make the next big thing, consider this approach.

14
singularity2001 3 hours ago 2 replies      
"Success of the attack relies on negligence of the developer"

How about package manager maintainers accepting their enormous responsibility? urllib vs urllib2 - one is a virus? Sorry, but that is not "negligence of the developer".

15
bhouston 3 hours ago 1 reply      
I bet there are quite a few malicious NPM packages that we do not know about.

Is Node used in government and military solutions? If so, then the NPM ecosystem is likely targeted by state actors, and it is a sitting duck.

16
jastr 55 minutes ago 1 reply      
To check a few different requirements.txt files (this will look 3 folders deep):

find . -maxdepth 3 -name requirements.txt | xargs egrep '^(acqusition|apidev-coop|bzip|crypt|django-server|pwd|setup-tools|telnet|urlib3|urllib)'

17
mwerty 2 hours ago 0 replies      
How about a Levenshtein distance threshold for new package names to be accepted? I.e., only allow names that are different enough from the existing set to avoid typos (or whatever errors we are trying to guard against).
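
A minimal sketch of such a check in Python, using difflib's similarity ratio from the standard library as a stand-in for Levenshtein distance (the existing-name set and the 0.85 threshold are made-up assumptions):

    import difflib

    EXISTING = {"urllib3", "requests", "cryptography", "setuptools"}

    def too_similar(candidate, existing=EXISTING, threshold=0.85):
        # Flag a candidate name suspiciously close to a known package.
        for name in existing:
            if candidate == name:
                continue  # exact duplicates are rejected by the index anyway
            if difflib.SequenceMatcher(None, candidate, name).ratio() >= threshold:
                return name
        return None

    print(too_similar("urlib3"))       # -> 'urllib3' (likely typosquat)
    print(too_similar("flask-login"))  # -> None

The hard part is the threshold: too strict and legitimately similar names (urllib2 vs urllib3) collide; too loose and single-character typos slip through.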
18
EstDelenda 3 hours ago 0 replies      
Any method of software distribution that is not rooted in cryptographic author verification against a fine-grained, user-manageable trust store should have been put below the sanity waterline 20 years ago.
19
elcapitan 2 hours ago 2 replies      
Would it be possible to have a general package manager (like apt) as a reusable base for the individual language-specific package managers? I know that npm, pip, gem, etc. all do some additional stuff, but at the core they all do the same thing (pull packages from a repo, do some post-install work, resolve dependencies, maybe in some cases even check that the package is legit). So we could implement and audit that once, and then just reuse it like we do with many other libraries for image processing etc.
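
A toy sketch of what such a shared core might look like (fetch plus integrity check; the language-specific layer would add dependency resolution and post-install hooks):

import hashlib
import urllib.request

def fetch_and_verify(url, sha256_hex):
    # download an artifact and refuse it unless the checksum matches
    data = urllib.request.urlopen(url).read()
    if hashlib.sha256(data).hexdigest() != sha256_hex:
        raise ValueError("checksum mismatch; refusing to install " + url)
    return data
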
20
dpflan 4 hours ago 1 reply      
This is interesting in conjunction with the recent post about Python's popularity because that may be a weakness exploited here [1.]. It's easy to use and install and get libraries for anything, and apparently libraries for infecting your machine :(.

[1.] https://news.ycombinator.com/item?id=15249348

21
ehnto 3 hours ago 0 replies      
Part of my dislike for the Node ecosystem in particular (and I am sure others have a similar problem) is that the dependency trees are super complex.

Because packages tend to be small and many, and each of those has its own dependencies, you can end up with hundreds of packages installed, which is simply impractical to review manually.

It is not Node, but we do in fact manually review each package we use in our given language, because the dependency tree in that ecosystem is small enough to make this feasible and worthwhile. Each and every package is a possible attack vector, whether intentionally or just because it's poorly written, and we can't simply ignore that because it's the done thing and "the community reviews them".

22
1ba9115454 2 hours ago 3 replies      
Unless your package manager enforces signatures and you trust the person who signed the package, this is an attack vector for you.

That includes Java (Maven), Ruby (Gems, Bundler), Node (npm), Haskell (stack), etc.

Installing code via package managers is the coder's equivalent of opening an exe sent to you in an email.

Code downloaded from the internet is not to be trusted.

23
phonkee 47 minutes ago 0 replies      
Do not forget their password that worked for a couple of years: nbuSR123 ...
25
pishpash 1 hour ago 0 replies      
Maybe gov.sk should be vouched for too. I mean, what's the chain of trust here? Why should I trust anyone?
26
atticusberg 2 hours ago 1 reply      
To see if you have any of these deps on your Python path:

pip list --format=legacy | egrep -e '^acqusition$' -e '^apidev-coop$' -e '^bzip$' -e '^crypt$' -e '^django-server$' -e '^pwd$' -e '^setup-tools$' -e '^telnet$' -e '^urlib3$' -e '^urllib$'

To see if you have any projects in a given directory that require them:

cat $(find /path/to/dir -name 'requirements.txt') | egrep -e '^acqusition==' -e '^apidev-coop==' -e '^bzip==' -e '^crypt==' -e '^django-server==' -e '^pwd==' -e '^setup-tools==' -e '^telnet==' -e '^urlib3==' -e '^urllib=='

27
julianj 3 hours ago 0 replies      
Looks like they missed one:

https://pypkg.com/pypi/xml/f/setup.py

Dork: site:https://pypkg.com intext:"just toy, no harm"

28
kumarvvr 3 hours ago 1 reply      
Whoa, urlib3 & urllib. Those mimic pretty popular packages, especially among newbies; hundreds of websites that teach web scraping use the real libraries.

I wonder what an effective form of protection against such attack vectors would be.

Do digitally signed certificates fit into this usage scenario?

29
justinsaccount 3 hours ago 1 reply      
well shit, I guess I should have followed up on this after I noticed it 2 months ago.

https://twitter.com/JustinAzoff/status/881163562739277824

30
wiradikusuma 3 hours ago 2 replies      
Anyone know if this is also an issue for Java? I've used the Maven repository for ages, and I know many big cos depend on it.
31
jamespo 4 hours ago 0 replies      
I wonder if it's worthwhile having a check that compares a new name's closeness to existing popular packages and, if it is close, does some extended vetting.
32
amykhar 2 hours ago 0 replies      
Some of this could be helped by intelligent naming of packages. If something is called urllib, name the package urllib, because that's what people are going to look for.
33
a3n 4 hours ago 1 reply      
Dry run?
34
fruiapps 4 hours ago 0 replies      
Curious to know whether something similar is happening for Scala.
35
VMG 4 hours ago 0 replies      
How likely is it that npm and other package managers that do not use digital signatures by default are unaffected?
36
EGreg 3 hours ago 0 replies      
Here is the general problem with dependencies:

When a dependency changes, all the projects that directly depend on it should get notified immediately and their maintainers should rush to test the new changes, to see if they break anything.

There is no shortcut around this, because if B1, B2, ... Bn depend on A1, the consequences may be different for each Bk.

The only real secure optimization that can be done is realizing that some of the Bk use A1 in the exact same limited way, and thus making an intermediate A1b that depends on A1, which those Bk's depend on in turn. These "projection" builds may be automated by, e.g., the set of methods called by the B's.

Anyway, this is the way that iOS does it before iOS 11 comes out to users. They release a beta to all developers. And they even fix bugs in the beta before releasing to the public.

Without beta testing periods, you get laziness and just auto-accepting of whatever came out.

There should be an "alpha release" feature in Git where maintainers can put out the next version to be tested by all who depend on it. THIS FEATURE SHOULD NOTIFY THE MAINTAINERS SUBSCRIBED TO THE REPO. THE BUILD ITSELF SHOULD GET ISSUES AND RATINGS FROM MAINTAINERS AS THEY TEST THE NEW BUILD. And releases should not be too frequent.

This is the way to prevent bad things from happening. But that also means that the deeper the dependency is, the more levels this process could take to propagate to end-users.
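
The notification step described above is simple to sketch given a reverse-dependency index; the index and callback here are hypothetical:

# fan out an alert to direct dependents when a package cuts a new release
reverse_deps = {"A1": ["B1", "B2", "B3"]}  # hypothetical reverse-dependency index

def on_release(package, version, notify):
    for dependent in reverse_deps.get(package, []):
        notify(dependent, package + " " + version + " released; please re-test")

on_release("A1", "2.0", notify=lambda who, msg: print(who, "<-", msg))
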

37
EGreg 3 hours ago 0 replies      
This is why I am not a huge fan of using package managers. I like to understand the code we put into our platform, and vet it. And not have it change under us automatically after that, but review the changes manually before accepting them.

I felt a bit curmudgeonly, but we have a responsibility at https://qbix.com/platform for all our apps being secure. I wanted to use repos for each package and manually git pull or hg pull them when they changed.

I was finally convinced by our developers to just use package managers with version pinning. Honestly, it's really hard to avoid package managers, especially for all the newer functionality such as Payment Requests or Web Push. Luckily there is version pinning.

We want our clients to feel secure that we vetted ALL the code that went into the platform. So our package.json (and composer.json) uses version pinning. We'd rather take a bug report and manually fix it than get NO bug report and have a SHTF moment.
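
Pinning records exact versions rather than ranges, so nothing changes underneath you without a deliberate update. In the Python ecosystem the equivalent is a fully pinned requirements.txt (the entries below are purely illustrative):

requests==2.18.4
urllib3==1.22
certifi==2017.7.27.1
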

38
kevin_thibedeau 2 hours ago 0 replies      
Python needs a way to run 2to3 during package installation that doesn't use setup.py (setup.cfg or wheels). As it stands now, you have the hassle of building a release four times if you want to support all combos of Py2, Py3, 32-bit, and 64-bit platforms. The absence of 2to3 support in the safer alternatives is why I stick with setup.py.
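
For context, the setup.py mechanism being referred to is setuptools' use_2to3 option; a minimal sketch with a hypothetical package name:

# setup.py: asks setuptools to run 2to3 over the sources at install time
from setuptools import setup

setup(
    name="example-pkg",      # hypothetical package name
    version="0.1",
    py_modules=["example"],
    use_2to3=True,           # convert Py2 sources with 2to3 when installed under Py3
)
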
39
cdnsteve 3 hours ago 1 reply      
I'm all for security, but this hit a nerve with me: "Success of the attack relies on negligence of the developer, or system administrator, who does not check the name of the package thoroughly."

Package managers need to do more. If they had an enterprise version that you could subscribe to with a monthly/annual invoice, they would get enterprises on board; enterprises are concerned about security and will pay. Developers like us will help encourage it. I'd rather not see some third-party "secure" package managers; make them part of PyPI and send the funding to the Python foundation. They are seeking donations, but that doesn't work well with businesses. Make it a monthly/yearly service.

40
Aissen 2 hours ago 1 reply      
I'm glad Go completely sidestepped the name rush induced by this type of package manager (composer, cpan, rvm, pypi, npm). Just provide a URL. Done.
15
Patreon raises big round at $450M valuation techcrunch.com
72 points by doppp  7 hours ago   54 comments top 7
1
joelthelion 3 hours ago 4 replies      
That means they're going to expect a lot of revenue, which they plan to take from money that people give freely. I don't see that ending well.
2
outoftacos 1 hour ago 3 replies      
Oh god, now they're trapped in the endless, useless growth-demand cycle. Too bad; I enjoy supporting a few artists on there, and it's just a matter of time until this hurts them.
3
sixdimensional 29 minutes ago 0 replies      
I remember the first time I saw a music video by Jack Conte on YouTube, way before he started Patreon. He did a bunch of cool music videos (example: https://youtu.be/lBUUOJpFg9Y). Also, his work with Pomplamoose was pretty awesome.

I remember hearing him talk about his idea for Patreon on some of those YouTube videos. He seems like a pretty nice guy, so I am super excited for him to hear that Patreon has been such a great success!!

Let's hope they can continue their mission and keep taking it to positive places from here!

4
raverbashing 3 hours ago 3 replies      
Patreon is fighting the good fight so far, especially with YT promoting more disposable content with every passing day
5
gourou 3 hours ago 4 replies      
They probably have the biggest platform for paying creatives, I wonder when they'll ditch YouTube to get more control over their user base and increase their 5% margin.
6
EGreg 2 hours ago 2 replies      
What if there was a crypto-currency for this? :)
7
dandermotj 2 hours ago 5 replies      
That's a P/E of 60... Also this:

> In exchange, Patreon takes only a tiny 5% cut.

I hope this is sarcasm, because if I had to pay 5% to make any other transaction I'd be fuming. With a marginal cost of facilitating a new patron near zero, I don't think creators or patrons will suffer this as Patreon grows.
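
(Worked out: the multiple follows if Patreon processes roughly $150M a year in pledges, as was reported around this round; 5% of $150M is $7.5M, and $450M / $7.5M = 60, which makes the figure a price-to-revenue multiple rather than a P/E.)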
