I am not usually one for paranoia, but is anyone else becoming more suspicious about Facebook's motivations and involvement with government? This feature is a massive boost for intelligence services dealing with unsophisticated actors. It shrinks the haystack significantly, with users self-flagging messages that may be incriminating. Millions of FB messages must be sent every day; brute-forcing encryption on all of them is probably not possible. A small percentage marked as 'secret conversations'? Much easier.
Why doesn't FB just apply encryption to all messages? Surely they have the resources available. Is it because this feature makes somebody else's job a lot easier? If my suspicions are correct, what sort of threats would this pick up? Are serious threats likely to use FB messages flagged as 'secret conversations' to co-ordinate actions?
- FBM is multi-device, and we'd like to see E2E usability improve to support this. For now, pick one device and keys never leave it
- Secret conversations don't currently support popular features like searching message history, switching devices, voice/video, etc
- Hundreds of millions use Messenger from a web browser. No secure way to verify code or store keys without routing through mobile.
"We don't want to disrupt people's current experience."
It's going to take a pretty high standard of proof to give this anything that resembles credibility.
also, here is the white paper (from the above post): https://fbnewsroomus.files.wordpress.com/2016/07/secret_conv...
Page 10 of the white paper mentions that there is a remote key stored on Facebook servers which can be used to decrypt the local key. If Facebook is still to be trusted, I don't see what the deal is here.
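For what it's worth, "a remote key that can decrypt the local key" is a standard key-wrapping (escrow) pattern, and the concern is easy to see in miniature. Below is a toy sketch, stdlib only, with XOR against a hash-derived keystream standing in for a real cipher; the key names are invented. The point it illustrates: whoever holds the server-side wrapping key can recover the device's local key.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: counter-mode hashing (illustration only, not a real cipher)
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def wrap(wrapping_key: bytes, local_key: bytes) -> bytes:
    # XOR the local key with a keystream derived from the wrapping key
    return bytes(a ^ b for a, b in
                 zip(local_key, keystream(wrapping_key, len(local_key))))

unwrap = wrap  # XOR is its own inverse

# Device generates a local key; the server holds the wrapping ("remote") key.
remote_key = b"held-on-facebook-servers"
local_key = hashlib.sha256(b"device-entropy").digest()

stored_blob = wrap(remote_key, local_key)
# Whoever holds remote_key can recover local_key from the stored blob:
assert unwrap(remote_key, stored_blob) == local_key
```

So whether this is a problem depends entirely on who controls the wrapping key, which is exactly the trust question raised above.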
I think that as soon as you put the words "end-to-end encryption" in marketing material, you have to be ready to open-source your client. This is a cost that companies aiming to be credible can't escape.
End-to-end encryption without open-source has no value. It is a waste of energy for the company doing that too - or perhaps a marketing cost.
If you want to resist mass surveillance this is not a good solution.
This app has access to:
- find accounts on the device
- read your own contact card
- add or remove accounts
- read your contacts
- modify your contacts
- precise location (GPS and network-based)
- approximate location (network-based)
- edit your text messages (SMS or MMS)
- receive text messages (SMS)
- send SMS messages
- read your text messages (SMS or MMS)
- receive text messages (MMS)
- read phone status and identity
- read call log
- directly call phone numbers
- reroute outgoing calls
- modify or delete the contents of your USB storage
- read the contents of your USB storage
- take pictures and videos
- view Wi-Fi connections
- receive data from Internet
- download files without notification
- control vibration
- run at startup
- draw over other apps
- pair with Bluetooth devices
- send sticky broadcast
- create accounts and set passwords
- change network connectivity
- prevent device from sleeping
- install shortcuts
- read battery statistics
- read sync settings
- toggle sync on and off
- read Google service configuration
- view network connections
- change your audio settings
- full network access
Your goal of making encryption easy to use by the masses is coming true. It looks as if PGP's days are numbered.
I still find it hard to believe so many people trust what they believe to be private communication to close-lipped advertising companies.
Reading the technical docs now (https://fbnewsroomus.files.wordpress.com/2016/07/secret_conv...).
Edit: Yep, this seems device-to-device; there doesn't seem to be a web component here. Still useful given how many people use messenger primarily via phone, and I suspect implementation wasn't hard given WhatsApp did it first. It would be neat to see if Messenger and WhatsApp are ever bridged through this.
What are the licensing conditions / restrictions for using the protocol?
The message appeared on the Facebook page as "encrypted message".
I guess you can hardly get better than this.
Now both WhatsApp and Facebook have this, but surely they have the encryption keys too; how else would they seamlessly fetch your messages and decrypt them when you get a new phone?
If they do, then what's the point?
I'm scared of what will be possible to extract from my chat logs in a few years, but the benefit of being able to IM people that only have FB feels greater right now.
Biggest problem I see so far is the multiple-devices issue, but for most people it will be just desktop and mobile, so why can't each message be sent twice, encrypted separately for each device (automatically, not manually)? Does OTR3 have this feature?
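Per-device fan-out is essentially what multi-device E2E schemes do: the sender keeps a separate session key per registered device and encrypts the same message once per device. A toy sketch of the idea (the "cipher" here is a hash-keystream stand-in, not a real one, and the device names and keys are invented):

```python
import hashlib

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy stand-in for a real cipher: XOR with a hash-derived keystream
    stream, counter = b"", 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(plaintext, stream))

decrypt = encrypt  # XOR against the same keystream is symmetric

# One hypothetical session key per registered device
device_keys = {
    "phone":   hashlib.sha256(b"phone-session").digest(),
    "desktop": hashlib.sha256(b"desktop-session").digest(),
}

def send(message: bytes) -> dict:
    # Fan-out: encrypt the same message once per device
    return {device: encrypt(key, message) for device, key in device_keys.items()}

ciphertexts = send(b"meet at noon")
assert decrypt(device_keys["desktop"], ciphertexts["desktop"]) == b"meet at noon"
assert ciphertexts["phone"] != ciphertexts["desktop"]
```

The cost is linear in the number of devices, which is presumably why the launch version just restricts you to one.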
Ya know, now that it won't be such a pain to support another protocol and all, since they're doing it anyway.
They deserve that reputation.
They should call it 'private conversations' instead of 'secret conversations' though.
This is insane--I thought the whole model of Facebook chat was that they are grabbing all sorts of info from the messages for ads. What the fuck?
- plain vanilla messages I get on the Facebook web site
- 'chat' messages, were I to turn on 'chat' on the Facebook web site
I'm not asking rhetorically. I honestly can't keep up with all the messaging avenues available today...
Additionally, from what I've gathered, they are going to roll this out so that you have to specifically tell Messenger you want to encrypt a chat. Why would they not just make encryption universal? If anything, this makes it even easier for the government or other entities to target "suspicious activity." I'm far too skeptical of Facebook and how they are going about this whole process to be happy about it.
Can an exact match for a FB-provided binary be recreated from the open source code? If it's a no, then it's back to trusting FB to do the right thing, and it doesn't make the slightest difference what exact protocol it's running or whether the source was peer-reviewed behind closed doors.
This seems like it's a subtle endorsement for using Open Whisper Systems for criminal activities. Is it just me, or does that seem like the wrong image to gravitate towards?
During the landing, you'll hear mention of 1201 and 1202 alarms. Here's what that's about:
Here's 1201 being called:
I think this one is my favorite module: https://github.com/chrislgarry/Apollo-11/blob/master/THE_LUN...
    CAF     CODE500    # ASTRONAUT:    PLEASE CRANK THE
    TC      BANKCALL   #               SILLY THING AROUND
    CADR    GOPERF1
    TCF     GOTOP00H   # TERMINATE
    TCF     P63SPOT3   # PROCEED       SEE IF HE'S LYING
    TC      BANKCALL   # ENTER         INITIALIZE LANDING RADAR
    CADR    SETPOS1
    TC      POSTJUMP   # OFF TO SEE THE WIZARD ...
    CADR    BURNBABY
"This source code has been transcribed or otherwise adapted from digitized images of a hardcopy from the MIT Museum. The digitization was performed by Paul Fjeld, and arranged for by Deborah Douglas of the Museum. Many thanks to both. The images (with suitable reduction in storage size and consequent reduction in image quality as well) are available online at www.ibiblio.org/apollo. "
I mean, I realise that this is the least of the amazing achievements we're talking about here, but yea.. respect :)
There's a mod for Kerbal Space Program which gives it real solar system planets and dimensions. (KSP's world is way undersized so things happen faster.)
There's another mod for Kerbal Space Program to give it real dynamics. (KSP doesn't really do dynamics right; the spacecraft is under the gravitational influence of only one body at a time. This is why there's that sudden trajectory change upon Mun capture.)
Someone should hook all that together and do a moon landing in simulation.
 http://svtsim.com/moonjs/agc.html https://www.reddit.com/r/KerbalSpaceProgram/comments/1piaqi/... http://forum.kerbalspaceprogram.com/index.php?/topic/62205-w...
    TC      BANKCALL   # TEMPORARY, I HOPE HOPE HOPE
    CADR    STOPRATE   # TEMPORARY, I HOPE HOPE HOPE
    TC      DOWNFLAG   # PERMIT X-AXIS OVERRIDE
    CA      A          # SHOULD NEVER HIT THIS LOCATION
FWIW, I did performance analysis of the guidance computer and the 1202 and 1201 alarms at the start of my ACM Applicative 2016 keynote: https://youtu.be/eO94l0aGLCA?t=4m38s
I want to just leave comments like this and have users be responsible for avoiding said chaos.
It would be fun to do some research into the embedding of higher-level virtual machines in earlier computers. I'm thinking of 'The Interpreter' in the AGC as being an ancestor to 'SWEET16' in the Apple II (https://en.wikipedia.org/wiki/SWEET16), or the 'Graphic Programming Language' (http://www.unige.ch/medecine/nouspikel/ti99/gpl.htm) in the TI-99/4A.
The Apollo Guidance Computer (AGC) was a digital computer produced for the Apollo program that was installed on board each Apollo Command Module (CM) and Lunar Module (LM). The AGC provided computation and electronic interfaces for guidance, navigation, and control of the spacecraft. The AGC had a 16-bit word length, with 15 data bits and one parity bit. Most of the software on the AGC was stored in a special read only memory known as core rope memory, fashioned by weaving wires through magnetic cores, though a small amount of read-write core memory was provided.
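The 15-data-bits-plus-parity word layout is easy to sketch. The AGC used odd parity; the exact bit placement below is illustrative, not a claim about the hardware wiring:

```python
def parity_bit(word15: int) -> int:
    # Odd parity: choose the extra bit so the total number of 1s is odd
    ones = bin(word15 & 0x7FFF).count("1")
    return 0 if ones % 2 == 1 else 1

def agc_word(word15: int) -> int:
    # Pack 15 data bits plus the parity bit into a 16-bit word
    return (word15 << 1) | parity_bit(word15)

word = agc_word(0b000000000000101)  # 15 data bits, two 1s
# Every stored word ends up with odd overall parity, so any single
# bit flip in memory is detectable:
assert bin(word).count("1") % 2 == 1
```

A single flipped bit (the failure mode radiation tends to cause) makes the parity check fail, which is why the scheme was worth a sixteenth of every word.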
Even setting that aside, what is it I'm looking at? Assembly?
The other recurring theme in the book is the disturbingly short MTTF for flight computers during the mid 1960s. Statistically, NASA had to plan for a computer failure en route to the moon, and so repair-vs-replace became a serious issue. (Yeah, they seriously considered soldering in zero-g.)
For instance, the type of memory was called core rope memory https://en.wikipedia.org/wiki/Apollo_Guidance_Computer
For anyone interested, XPrize winner Karsten Becker talks to popular YouTube blogger David Jones about radiation and extreme heat and cold in space, and specifically talks about bit flips and how electronic parts are sourced for such endeavors.
Interesting to me was the "paperwork" cited in the interview for space-hardened components. In other words, people are concerned with stuff falling back to Earth (wouldn't it burn up?) or being used for not-so-friendly purposes (war).
2048 words of RAM (4k 'bytes'/octets) using magnets?!
36,864 words of ROM
Ok, this is actually a really interesting read: https://en.wikipedia.org/wiki/Apollo_Guidance_Computer
I've often wondered many things about the cleanliness, maintainability and style of such code (this particular system, in fact). It's fun to be able to actually poke through it.
It's an incredibly well done and at times hilarious narration of the moon mission. (Spoiler: contains a part where Armstrong overrides the automatic control and lands manually)
This is probably my favorite presentation ever.
The CPU is built entirely from 3-input NOR gates.
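NOR is functionally complete, so one gate type really is enough. A quick sketch of deriving the other basic gates from a 3-input NOR (spare inputs tied to 0, as the AGC's designers could do in hardware):

```python
def nor3(a: int, b: int, c: int) -> int:
    # The AGC's single building block: 3-input NOR
    return 0 if (a or b or c) else 1

# Every other gate falls out of it:
def not_(a):    return nor3(a, a, a)
def or_(a, b):  return not_(nor3(a, b, 0))
def and_(a, b): return nor3(not_(a), not_(b), 0)
def xor_(a, b): return or_(and_(a, not_(b)), and_(not_(a), b))

# Verify the full truth tables against Python's bit operators
for a in (0, 1):
    for b in (0, 1):
        assert or_(a, b) == (a | b)
        assert and_(a, b) == (a & b)
        assert xor_(a, b) == (a ^ b)
```

Using a single gate type simplified manufacturing and qualification: every logic IC in the machine was the same dual 3-input NOR part.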
Edit: .s files indicate:
# Copyright:Public domain.
It also reminded me of a recent article talking about how you can break audio codecs by guessing which quantizer was used by the packet, then using it in reverse to produce speech! Which I suppose is obvious in retrospect, that lossy codecs are trying to compress data by making it perceptually similar, whatever the domain.
I also appreciated the ties to video game networking. Gaffer on Games has had a long-running series on designing multiplayer networking protocols with UDP and you two approach bit-shaving very similarly (unsurprisingly I suppose - it's a very specific process with its own tools).
Anyway, thank you! I learned a lot.
I love projects/blogs like this, since it is "back to the basics" and we all learn something by better understanding how things from codecs, to compression, and so on work. This one is wonderful and one of the best reads this week.
You can be greedy and take the bandwidth anyway at the expense of everything else, but in some conditions this may cause a worse outcome even for your own traffic. It's likely better to lower your data-rate target and drop rarely than to send too much and drop randomly at a higher rate.
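A toy model makes the point. The dependency penalty below is an assumption meant to mimic formats like inter-frame video or ordered delivery, where a random drop also wastes packets that did arrive:

```python
def effective_goodput(send_rate: float, capacity: float) -> float:
    # Toy model: packets beyond link capacity are dropped at random across
    # the whole stream, and each dropped packet also wastes one dependent
    # packet that made it through (inter-frame video, ordered delivery, etc.)
    loss = max(0.0, (send_rate - capacity) / send_rate)
    delivered = send_rate * (1.0 - loss)   # == min(send_rate, capacity)
    return delivered * (1.0 - loss)        # survivors whose dependencies also arrived

capacity = 10.0
paced = effective_goodput(10.0, capacity)   # send exactly what the link carries
greedy = effective_goodput(20.0, capacity)  # oversend; let the network drop half

assert paced == 10.0
assert greedy == 5.0  # same bytes delivered, half as much usable data
```

Oversending at twice capacity delivers the same number of bytes but only half the usable data, which is the "worse outcome even for your traffic" above.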
As far as I remember, bandwidth was more or less unlimited. It was the nasty syncing issues that I remember giving me nightmares.
The Dear ImGui library looks excellent in its simplicity.
The stupidest, most counter-productive aspect of all these MOOC systems is the artificial schedule imposed on the course. While I've been able to take a couple to completion, I've had to let some fall by the wayside due to scheduling. Once that happens, you're disincentivized to catch up because you're behind on the deadlines that affect scoring. When I've gone back to finish courses I had to set aside, sometimes I've lost access to the materials because the course has shifted to its next "semester". There's absolutely no reason for that. While there are a few courses, like music or writing, where you are collaborating or cross-reviewing other people's work, most of them are standalone lectures, homework, and tests.
Many of the courses on the old platform are slowly coming back on the new platform. When I built the list of courses on the old platform the course count was 472; now it's around 390. Some of the notables that I was excited to see come back are:
Neural Networks for Machine Learning with Geoffrey Hinton 
Computer Architecture from Princeton 
Programming Languages from UW by Dan Grossman 
Introduction to Natural Language Processing by Dragomir Radev 
Many of these courses were last offered a couple of years ago. Hopefully more courses from the list start coming back.
Or do you have some Digital Ocean promotional credits left, that are about to expire? Spin up a (few) VMs with docker-containers running the warrior on DigitalOcean!
Credit Card, PayPal, Bitcoin. Brewster is an amazing Steward of the Internet Archive.
Video Tour: https://vimeo.com/59207751
If I am forced to buy one of these new Coursera certs, I will donate every time to the Archive Team.
I guess I'm asking, how is this legal?
That might be a source of income for coursera, probably enough to cover operational expenses of running the old platform with old content.
I'd love to see a site that specialises in user-contributed content along the lines of Wikipedia. It's funny though - take SAP as an example: I'd be just as happy reading a book that explains it all better than what is out there right now! A book that assumes you are into technology but have little knowledge of the business processes that SAP is entwined with, and which gives you a rundown of those before a detailed rundown of how SAP implements them.
Sadly, no such thing exists, but happily for me I stumbled upon http://www.accountingverse.com/ and http://www.accountingcoach.com/ (no, I'm not affiliated with them in any way), and it turns out they didn't cost anything and I finally "get" double-entry bookkeeping and financial transaction concepts like the general ledger, journal, accrual method, and the fundamental accounting equation. Wish I'd known this earlier, to be honest - as I say, I lament that there are no books on SAP core modules that go from concepts to the nuts and bolts of how SAP does things :-(
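Once it clicks, double-entry bookkeeping fits in a few lines. A minimal sketch (account names and amounts invented) showing that every transaction debits one account and credits another, keeping the fundamental equation Assets = Liabilities + Equity in balance:

```python
# Asset accounts carry a debit-normal balance; liability and equity
# accounts carry a credit-normal balance.
ASSETS = {"Cash", "Equipment"}
balances = {"Cash": 0, "Equipment": 0,       # assets
            "Loan": 0, "Owner's Equity": 0}  # liabilities / equity

def post(debit: str, credit: str, amount: int) -> None:
    # A debit increases an asset and decreases a liability/equity account;
    # a credit does the opposite. Every entry touches exactly two accounts.
    balances[debit] += amount if debit in ASSETS else -amount
    balances[credit] += -amount if credit in ASSETS else amount

post("Cash", "Owner's Equity", 1000)  # owner invests cash
post("Equipment", "Cash", 400)        # buy equipment with cash
post("Cash", "Loan", 500)             # borrow from the bank

assets = sum(balances[a] for a in ASSETS)
liabilities_equity = sum(v for a, v in balances.items() if a not in ASSETS)
assert assets == liabilities_equity == 1500  # the equation always balances
```

The general ledger is just the `balances` dict, and the journal is the sequence of `post` calls; everything else in the textbooks builds on this invariant.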
Even as the questions were building and the noose was tightening, she would announce a public talk to reveal and openly share information on the company's tech, then mysteriously cancel a short while later -- she did this over and over.
I'm still not convinced that Theranos was meant to be a scam, or at least not a scam in the way most people are thinking about it. But it has definitely produced a similar output, and that makes it functionally equivalent.
I hope, really hope, that someday the true story about all the WTF-ness around Holmes and Theranos comes out.
I've been on the inside of a company like this once and I ran away as soon as I realized the place was up to no good. What still bothers me about my personal experience is that I, even as a person on the inside, still don't know the truth about that company due to all the same weird kind of cult of secrecy things we've seen at Theranos. The truth in these kinds of things, I suspect, lives only inside the heads of the people who run these kinds of organizations and it may not ever be possible to get at what the truth actually is depending on how far down the delusion hole they've fallen.
Real kudos to the press who broke and made very public the stories. This was the media at its finest. Those journalists may have helped save many lives.
This business didn't pass a basic litmus test of objective criticism from people who work in the space.
There seems to be this bravado among founders who believe they're sticking it to all the people who say something's amiss. I think if there's an elephant in the room though, it probably should be assassinated with a huge body of transparent evidence.
I tend to see YC as being more "evolved" than other VC's, but I also think that these problems are more ingrained into the human condition.
YC could consider making funding contingent on all high-level company officers attending or participating in some kind of ethics course. It could even be remote.
There could also be penalties built into VC agreements. If a company violates contracted ethical rules, then penalties could include replacement of staff, more shares being given to the VC, low share buyback prices for the VC, etc.
It could also be a two-way street. If YC violates some kind of ethical rule, then they could be punished by having to do something for the companies they represent.
Ethics are important but so far they have just been "best practices" in our industry and not something contracted and enforced.
My fiancee works as a laboratory scientist in a hospital conducting patient sample testing, and she and her co-workers take their work incredibly seriously: checking and re-checking their work, with complex protocols to guarantee the accuracy of their testing. It's disappointing to see that same attitude lacking at Theranos.
Side note: Anyone else have trouble viewing the WSJ article? I had to read the full text through a private news outlet, even though I tried signing in to WSJ with Facebook :/
Edit: Added a sentence of detail to the last paragraph, for those who still can't see the article.
FYI, I am not the kind of guy who goes around preaching about reform. But this shit is stupid as hell. I think major offenses like these should be sent to trial, or even require a congressional hearing and congressional punishment.
Maybe there is little precedent, but that length of time seems a bit arbitrary (and too short?).
Specifically, when he talks about naughtiness, he says:
Morally, they care about getting the big questions right, but not about observing proprieties. That's why I'd use the word naughty rather than evil. They delight in breaking rules, but not rules that matter.
I have absolutely no interest in trying to play language games - and maybe I'm misunderstanding what pg is saying, but it has always seemed to me that the judgement of whether the rule that was broken was consequential or not is post-hoc. For example: had Airbnb gotten in serious trouble and floundered in the aftermath of scraping craigslist for listings, they wouldn't be clever and naughty, but reckless and foolish. Had Zenefits managed to grow even more, or hired some key lobbyists and gotten the law changed in time, their CEO would be hailed as a visionary genius who cut through pointless red tape.
Anyway: the reason I brought that up is that a lot of the ethically dubious things that Elizabeth Holmes did are very similar to things which a lot of tech companies did at one time or another. Trying to push ambitious young men and women to look at rules and regulations as something they should take pride in hacking and bypassing is a dangerous game - even more so in fields that are highly regulated.
Addendum: part of the original hacker ethic was to ignore stupid rules. For example, we take delight in Feynman cracking safes at Los Alamos, or finding some clever hack to bypass a pointless procedure. However: I think it's one thing to hack a system to make a point about how stupid it is, but it's completely different if you add a monetary incentive, and suddenly the rules that get broken are those that stand in the way of you making money. The two sets have a fairly small overlap.
Addendum two: in a complex society like the one we live in, we have a lot of dumb rules. I'm not trying to defend them - we should obviously get rid of them, even in healthcare. A lot of economists have written intelligently about how to make the approval process of the FDA more agile.
This is a company that's essentially had multiple parts of it for sale for the better part of 4 years and this hasn't once come out.
I'm running through all the Yahoo corporate filings that Bloomberg has indexed and I can't find any link to this clause.
I mean this is a very material issue!
If you are a shareholder in a company that's trying to broker a $3-4 billion sale of some of its assets, then knowing that the buyer may be on the hook for an additional $1 billion bill probably means that the asset you thought you owned is worth 20-25% less than you originally thought.
Someone isn't going to be very happy with the Yahoo leadership today :)
That she agreed to Mozilla's change-of-control clause (so Mozilla can just walk away in case of an M&A deal and still get $1B) is simply disconcerting.
I do not have any insight into why she gave in on this point, but I know that one of her main skills and responsibilities in her position is to negotiate well and do proper deals. She had to either negotiate this change-of-control clause away or have Mozilla sacrifice some of the payout if they walk away. Moreover, considering that Mozilla doesn't have that many potential search-partner options (Google, with Chrome, has rather been a competitor for many years now), this should have been possible, I'd assume with my limited knowledge.
I don't like it when random forum guys like me bash CEOs; I know this is the toughest job and I don't want to pass judgement on decisions I don't have insight into. But this is really, really weird, and Marissa should have known that this bummer would pop up at the next due diligence and create distrust ('are there more time bombs at Yahoo? let's dig deeper'), or just reduce the deal value, or just increase deal complexity later.
Maybe she didn't think about M&A at that time and was rather in fire-and-forget mode, but a CEO is always supposed to think about what happens if new shareholders join, about the next due diligence, heck, just about the future of the company, and ultimately to keep the company in a proper and clean state, not leaving time bombs for potential successors.
- $375M/year to Mozilla, with the clause to keep paying for 3 more years if Mozilla doesn't want to do business with Yahoo's new owner.
- $450M/year to Mozilla, with no such clause.
The former seems like the smarter deal for Yahoo to make if they want to focus on being successful rather than on being bought. It only sounds problematic if you start focusing primarily on getting acquired.
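Rough arithmetic, using only the illustrative numbers above and assuming a hypothetical 5-year term (the real term and figures aren't in the thread):

```python
# Illustrative only: comparing the two hypothetical deal structures.
rate_with_clause = 375  # $M/year, with the change-of-control clause
rate_plain = 450        # $M/year, without the clause
term_years = 5          # assumed term length

# If Yahoo is never sold (or Mozilla stays), only the annual rate matters:
cost_with_clause = rate_with_clause * term_years  # 1875
cost_plain = rate_plain * term_years              # 2250
assert cost_with_clause < cost_plain  # the clause-discounted deal saves $375M

# If Yahoo is acquired and Mozilla walks with 3 years still owed, the
# buyer pays roughly an extra billion for nothing:
walk_away_liability = rate_with_clause * 3
assert walk_away_liability == 1125  # ~ the "$1B" figure discussed above
```

So under these assumed numbers the clause is a discount that only turns into a penalty if an acquisition triggers it, which is the point made above.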
Right. I wonder if she honestly thought this would have fixed anything, or if she knew the train was headed to the final station and just wanted to be surrounded by a group she picked, and the only way to get them to do that was to buy her friends.
It is always fascinating to watch a company like that, and wonder whether executives still privately believe it is a salvageable situation or whether they just put on a face and ride the gravy train with some nice golden-parachute contract clauses. They probably have to use euphemisms and hints with the board and other top-level people to convey their doubts about viability: they don't want to seem too negative and pessimistic, as that makes them look like liars in press releases, but they can't be completely oblivious either; that looks bad as well.
In the context of that negotiation I could certainly see it coming up that Yahoo! might be acquired, and Mozilla wanted some assurances if they went with Yahoo!. Neither Bing nor Google carries that risk, so Yahoo! is the only one exposed.
Is this a normal phrase in the enterprise world? I don't think I've ever heard it before.
Yahoo has cash. Sure, they're not Apple in terms of liquidity, but they're not a startup either. Mozilla is - and I may need to solicit your agreement here - a Good Thing for the world.
I'm not convinced that being a Bil in the hole to Mozilla is so bad.
By contrast, there are several companies that are, according to some legal theories that may yet prove persuasive in court, in debt this much or more to governments by dint of their offshore accounting practices.
At which company do you prefer to be a shareholder? One which owes Mozilla a billion, or one which very well might owe an armed, hotheaded, unpredictable entity several billion?
Ultimately, if I'm a shareholder (and I'm not), I can forgive a billion dollars to mozilla more easily than the other Yahoo mis-steps.
Would Mozilla ever want to exercise this clause? I don't know - I wonder how they would react if Verizon/AOL ends up buying them, given that Netscape was bought by them early on.
I do question the science and numbers in the study, even though I believe the basic premise to be true; correlation does not automatically imply causation. People suffering more after trees are removed can also mean that urbanization or development brought factories, unhealthier air, rodents, or any number of other negative factors with it.
I do intuitively relax more, and take great solace in my surroundings, and I do believe it is better for people. I would like to see more research on this; there have been a lot of debacles in the past two years in the social sciences and psychology with statistics and peer review. Some of the studies were taken for granted and are now under the microscope for being inconclusive or just wrong.
Yay for trees! And plants, animals, and all that entails!
There must be something about "natural forms" (as in varying, not changing, non-rectangular) that creates that feeling.
Take the example of NYC. The Upper East Side and Clinton Hill are two neighborhoods with a relatively large number of trees. Both of those neighborhoods are two of the most expensive and quiet neighborhoods in Manhattan/Brooklyn.
So it could just be that quiet streets and nice houses calm us down. But then again, maybe having more trees is what causes neighborhoods to be nice and quiet. As far as I can tell, it could go either way.
My own personal experience tends to confirm the main point put forth. Indeed, when we moved to the US Southwest several years ago, I thought I would miss oceans the most (having always lived on a coast). But no, I really miss seeing green -- my first time back east after moving here, the impact of seeing all those trees was really tremendous (& positive).
Having said that, the effect mentioned in the study can also be due to the amount of attention that a city street demands, and a lot of other factors. (Walking down Broadway in NYC in the middle of day just isn't the same as strolling through West Village on a Sunday morning!) Not to mention what other commenters have pointed out, e.g., correlation != causality. Quite likely the researchers have thought about this; I would be interested in what they found.
"This Is Your Brain on Nature"
Perhaps inspired by a similar line of thinking.
I found this the most interesting point in the article. I would have assumed that any psychological effect of viewing trees would be largely due to their greenness, since that is their dominant visual aspect. But, assuming a largely deciduous environment, naked trees in winter would seem to have the same effect. So the effect must be stimulated by something deeper than just raw color.
I wonder whether these principles could be incorporated into architecture and interior design, so we feel like we're in a natural setting even when indoors.
(Even better with trees visible through the windows, of course.)
> five times bigger in people who have been diagnosed with clinical depression
See permaculture, food security, urban farming, distributed production, decentralization.
Trees for some zen or aesthetic cause is an elitist and ignorant perspective. Land use in suburban environments is extremely poor. Food sustainability is very poor.
Trees are a good starting point to start researching. But there are much more serious reasons than a warm fuzzy feeling.
Houses on streets with trees are more expensive. People that can afford to live there are healthier for obvious reasons.
Likewise, people that are put into better hospital rooms are probably just patients the hospital is willing to expend more energy on, because they have deeper pockets/are the right ethnicity/are "respectable people" etc..
Is there any non-depressing source of science journalism left in the world?
I wonder how readable this is to someone who isn't experienced in Haskell. To me it reads like a breeze, but I have a project using the exact same parsing library, so maybe that puts me at an advantage.
The language-c library he uses is an excellent one: a fully spec-compliant C parser that's well maintained. I've based my C compiler on it and I haven't encountered any C code it couldn't parse yet. One time I upgraded to a new OS X release and Apple added some stupid thing to their headers that broke the parser, and a fix was merged within days. It takes away the entire headache of parsing C, leaving just the actual compiling.
In theory I'm a big supporter of Rust. I strongly feel that we should be using stronger-typed languages than C for developing security-critical applications, and from the outside, it looks like Rust solves a lot of my criticisms of C without giving up any benefits of C. A transition over to Rust could be a big win for security and reliability of many systems.
However, I'm reluctant to devote time to learning Rust, primarily because it's not supported by GCC (or any other GPL compiler that I know of). I hope the next cool thing the Rust community does is to continue the work done by Philip Herron on a Rust front-end for GCC. I know the classic response to this is, "Do it yourself!" but there are too many other areas of Open Source that are higher priorities for me, so sadly this will have to be done by someone else if it happens at all.
Best examples: rake (and everything in the ruby ecosystem basically), the amount of people touching ruby c code is very small compared to all the 'standard tools', or cargo.
I want to see what my new Rust code base looks like: does it compile with some heuristics, or is it just a 1:1 mapping of C to Rust primitives?
This was my immediate concern. Is there any chance this tool can produce anything close to clean, safe, idiomatic, rust code?
On the one hand, you could use CSmith with a C compiler as a convenient oracle, but on the other you would only be covering a very limited subset of e.g. the type system.
The most interesting part of this for me is that IPython 6 will not support Python 2.
>Projects such as Matplotlib and SymPy plan to drop support in the next few years, while a few projects like Scikit-Bio are already ahead of us, and should be Python 3 only soon.
This was also very surprising for the standard reasons, especially for a library like matplotlib. Glad to find Python moving forward. But what will companies stuck on Python2 do? Will libraries like numpy, matplotlib, and scipy all maintain a Python2 LTS?
On the other hand, I'm really happy to say farewell to readline. I've been stuck pinning an old readline version for ages just so I can have proper line wrap. Of course this breaks virtualenv creation, so you end up having to override the system readline. Needless to say, this is well overdue.
The way I understand this is that you will need Python 3 to run IPython 6, but once it's running, you can interact with Python 2 and run and inspect Python 2 code. I think that is fine. I can't imagine a modern scientific setup that can't readily create Python 2 and 3 virtual environments.
The best feature I've discovered so far, is that when I want to change a function, I can simply 'up arrow', and I get the whole thing! Not a single line of the function, as in the normal python terminal. And if I write a syntax error while typing the function, it tells me immediately!
Does anybody have other examples of great features in ipython over the standard python terminal?
I want to execute through lines with cmd-return, or by highlighting and pressing cmd-return, and then see the change in the variables in a separate pane, like RStudio's environment pane. Bonus points if I can click on table variables in the environment pane and examine them in a separate tab with sorting and searching. Spyder comes closest but the execution part doesn't work as fluidly.
I was waiting for something like this for years!
Unfortunately, last time I checked development (on issues blocking the initial stable release) seemed to have slowed as of late, but I should be helping out instead of whining - the Rust community is doing great work!
Tongue-in-cheek, it's very exciting that distros will now have an easier time patching Rust and producing bugs like this one:
So, does that mean that Rust (has/will have) issues regarding binary blobs in pure free software Linux distributions?
I actually use Rust in production, and have generally found it very good - compared to the well discussed difficulties with CPP. I will continue to do so, but I think that the edges really need covering off properly before it'll be treated as a serious competitor to CPP (and equally often, Go).
I like the way Nim handles the bootstrap. It's always easy, and it also eases ports to other platforms significantly since everything is coded in C.
Very nice to see the push for relying only on stable releases while building Rust.
Real estate is super expensive & the owners couldn't afford to sit idle and play games with the government so they decided to run the mall from morning till night on diesel generators. That's ~ $4500 of diesel each day, probably 6-8k litres. This is an example of a pollution source that's completely avoidable. I'm not sure for how many years this continued on.
What's worse is that there was no widespread public outrage. Why didn't people boycott the mall that's polluting their city and at the same time put pressure on the government to set things right?
For perspective, the city I'm talking about is a modern, affluent city close to Delhi (~160 miles), home to a lot of politicians & celebrities, one of the most well-planned cities in the world & one of the cleanest in India.
When I went to India, I thought to myself that the living standard in the villages was HIGHER than in the cities.
The air in the villages was clean, and the weather was much cooler since it wasn't trapped in the congested cities.
The village people thought I was crazy to think that their village was a paradise, and everyone wanted to move to the city.
I think it's a serious lack of education that India/Bangladesh will not overcome easily. A large number of people will need to die of cancer for it to be taken seriously, unfortunately.
We are talking about a country where chain smoking is really common!
And no, I am a person from the sub-continent, so I am not being racist, just pointing out some of the terrible facts about why I am terrified of going back.
A city like Bangalore has an almost impossible number of scooters on the roads. Almost every house has at least one, and most homes easily have one for every two people staying in the home. And it's not like the US, where there are laws on building homes. People take a 1200 sqft plot and build 4 floors with 4 families, so at least 6 two-wheelers for a small plot of land. This is even possible because auto rickshaws and buses have gotten expensive. Public transport of any kind is expensive, unpredictable, and not worth it when you look at the overall comfort and economics of having your own scooter.
On the other hand, trees are being cut at an alarming rate. Around 15 years back, the area where I stay had a drive in which students from an agricultural campus planted a tree per home. In the time since, only a countable few trees remain. People cut trees for various reasons, some of which are downright stupid: a big tree will attract birds who will crap on our cars/scooters, chirping birds disturb their morning sleep, or the tree will be impossible to cut if it grows too high.
The garbage landfills are full, so the government often doesn't collect garbage on time. Sometimes a whole month passes before the garbage is collected, so you have massive piles of trash (medical and all other toxic waste included) piling up right at the corner of the lane. This causes mosquitoes to breed, and then diseases like dengue spread. The most obvious solution people around the place turn to is burning the trash, causing all these toxic fumes to mix into the air and reach almost everyone's lungs in the area.
On top of this comes industrial pollution. Rivers and lakes are being polluted, encroached upon, and destroyed almost everywhere. Bangalore's lakes have almost disappeared. Many of the remaining ones are now cesspools. There was a lake that caught fire recently.
India urgently needs something along the lines of a Clean Air Act and a Clean Water Act. Feasibility of implementation remains a problem, though.
You haven't experienced air pollution if you haven't breathed in the evening air in Mumbai or Bangalore during the rush hour.
To be fair though, as in any Indian metropolis, there is immense pressure on the infrastructure, which takes solving such problems to the next level. The sheer daily growth in population (primarily due to migrants from other parts of India) would put any government in a quandary.
I am trying to find a way out of this problem. At the same time, I don't want to leave the tech job I love.
Has anybody experienced the same issues? Any advice?
This seems like a promising opportunity for startups to tackle... I wonder if there are any out there; would love to hear more.
And if you scroll down a bit more you'll find a web worker spawned by the same page consuming another 120MB of RAM. This is a lot of stuff happening on what is supposed to be just a news article.
Smolin also doesn't like many-worlds, because it talks about unreachable regions. This he considers too speculative. There's a basic problem in quantum mechanics, which leads to Schroedinger's Cat, the Copenhagen Interpretation, and, in the end, many-worlds.
Physics has been stuck on this problem for almost a century now. Philosophy won't help.
String theory isn't pseudoscience, because it does make testable, falsifiable predictions. In fact, it makes the same predictions that quantum field theory does for the phenomena that we are currently capable of testing (and arguably using a more elegant framework than QFT provides). The problem is that the predictions that string theory makes beyond QFT are currently way outside the realm of experimental assessment. This doesn't make the predictions false if that's the way the universe really is, we just may be incapable of knowing that for a long time. There are a few hints that at least some of the predictions of string theory may be incorrect: we have yet to find any evidence of supersymmetry, and many physicists thought the LHC would turn up at least some evidence of this if it existed.
Regardless, I think people get too hung up on buzz-phrases like "is time real?" or "are we in a multiverse?". A lot of these are what I consider irrelevant to science, as the goal of science is to make accurate predictions about the future. If there is another universe out there, it is by definition unreachable from our current universe, so it makes no difference to science whatsoever. Likewise, what is or isn't "real" has no bearing on how accurately one can compute the evolution of some system's state through time.
I understand that dark matter is used to explain gravitational irregularities when observing galaxies. However, saying 'there is x amount of unobservable stuff there' in order to make our calculations correct seems like lazy science.
Are there any theories that don't include dark matter or energy? Can gravity function differently depending on its location (space or time) in the universe?
Perhaps when we have some alternative theory that survives that basic hurdle, we can discuss whether we're wasting time pursuing string theory rather than some other idea.
But those models are correct only in what they describe; they don't completely describe nature. They are just that: models. Every time science has progressed, it's been either an "exception to the rule" or a re-understanding of an incomplete theory.
I'm not a scientist, but I vividly remember how a physics professor warned us about the laws he was teaching. Those laws work well, use them, but never pretend that they're universal and that they will be for the next 20 or 100 years.
You can see this problem in quantum mechanics. Quantum mechanics has been very hard on philosophy and on certainty in general. Up to a point, using models might be a limit of physics.
It seems like there's some lack of imagination here, like physics is "done", there's nothing more to answer, it's all just up to other sciences to answer the rest of the (many) questions about why the world is what it is and how it works. A reset of approaches is interesting, but it still needs to be in service of something, doesn't it?
As for Mathematics being selectively real, I agree with that totally. Mathematics is just a language we use to describe models and their properties in a most rigorous way. That's all. Can we find a model of the universe from which we can derive all its properties? Unlikely (see Gödel); if one existed, the model would have to be pretty "primitive" and thus too large for comprehension. But so what; we can find great models for certain aspects of reality, and I don't see a limit for how far this can be taken to get better and better models for more and more areas of reality.
My take - the only way the universe could have got here from nothing as it were is if it's basically maths and seems real to us. I mean what does basic particle behavior look like - a bunch of maths and not much else, and what exist without needing creating - mathematical patterns and relations and nothing else. Hence from observation it's probably all maths.
1) There is only one universe. - Nah probably all mathematically possible universes seem real to their occupants.
2) Time is real. - Sorta but more like how time exists in a DVD of a movie. Or as some guy wrote "...for us physicists believe the separation between past, present, and future is only an illusion, although a convincing one."
3) Mathematics is selectively real. - Nah it's real.
From my limited understanding, time is merely the measurement of change. A comet goes from point A to B, and like a screen flickering, it is in a discrete point in space every step along the way. It is time that is introduced when asking ourselves, "how long did it take?" or "when was it at X?" but how can it be woven into the fabric of the universe? If we take reality to be right here, right now, it doesn't seem to exist but rather is a human invention to help us map change.
Quoted from the book in the article:
"Our mathematical inventions offer us no shortcut to timeless truth... They never replace the work of scientific discovery and of imagination. The effectiveness of mathematics in natural science is reasonable because it is limited and relative."
I'm not sure what the authors mean by this. Are we to ultimately abandon our attempts at a consistent, logical explanation of what we see? If so, that's when we revert to un-scientific and, essentially, religious ways of thinking. I'd argue that the pursuit of a mathematical explanation of observations is the work of scientific imagination. Mathematics is merely the most reliable way we've devised of communicating what we imagine to other people. And sometimes the math, itself, sings to you and that drives advances in understanding that are totally valid.
Stop being impatient!
The way I see it:
- Consciousness (perceptions, feelings, thought) is the only thing that really exists.
- There are patterns in what we perceive, for example, when we drop a rock, we see it falls and hear the impact.
- So we develop models to describe patterns in our consciousness. Physics is just a description of those patterns. So the physical world doesn't really exist, it's just an idea in our minds.
What? How is this helping?
If it can happen in physics, it can happen anywhere -- maybe even your neighborhood.
Yessss, keep feeding me more of those delicious downvotes.
ok, except what else just has 1 of them
ask a photon that
Other thing that should be mentioned: a lot of initiatives at a lot of companies over the last two years just didn't work.
Feels a bit like we are at the tail end of a pretty big macro cycle of tech companies green lighting big initiatives with optimistic mindset and now a few years later the due bill is coming and the bets just didn't pay off.
Maybe a big example would be Twitter: a few years ago they were straight-up hoarding engineers; fast forward to now, and what did all this amazing engineering talent get them? Maybe nice code, but the engineering they've done hasn't been able to increase users.
The reason engineers get paid well is their extreme leverage: just a few engineers can pull off amazing results. But you eventually need the results. And if the results are not there, broadly, then there will be a retrenchment.
I suspect that jobs being posted to smaller niche job boards (such as Authentic Jobs, which I hadn't heard of until this post) is down overall, but this doesn't reflect the state of job availability.
I believe larger job sites have enough of a network effect that these boards are becoming less relevant. Most tech job seekers I talk to are going one of four routes.
1) LinkedIn (Facebook for job seekers)
2) Indeed (since it aggregates)
3) Direct application (when you know a specific company you want to work for)
4) Recruiters (Let them do the work for you...)
On the anecdotal front (within the past year), I have been contacted twice by recruiters who took me out and bought me lunch to try and woo me to another job, and once contacted directly by the CEO of a company who did the same thing. I want to make this point clear: I'm no one special, just a regular software dev with 15+ years of working experience.
During one of these lunches the recruiter said to me, "It is so difficult to hire qualified developers that I would say there is a NEGATIVE unemployment rate currently."
I have spoken with multiple company owners, CEOs, and others in hiring positions, and the general consensus is they can't find qualified help.
This is just my experience.
Even then, I'm not sure "tech hiring is down 40%" would be a reasonable conclusion to draw -- it's like saying, "the newspaper only had 15 job listings in it, the American economy must be in the toilet."
For the jobs that I have applied to, I would say I am pretty well qualified. I have put a lot of time into my resume to make it clean and legible. I usually spend over an hour writing a cover letter, and I am pretty confident they are interesting, well written, and have a friendly/personal tone without coming across as awkward or over eager.
Despite all of this effort and being pretty well qualified, I never hear back. Not even a rejection letter. Just silence. I have applied to pretty entry-level positions too for which I am overqualified, and also never hear back. Sample size is only about 10 applications, but they were very focused, well matched, high effort applications.
If there is any shortage of web developers out there, companies sure aren't acting like it. It's not even like I would ask very much, I only want maybe $75k since I am currently way underpaid (about ~$50k in a big city).
My guess is that this is just a side effect of unemployment being pretty high (if you look at realistic measures, not the biased official ones). Senior developers are willing to work for less than before because getting a job is harder these days, and less-than-senior developers can't get a job at all because there's only enough spots for the senior level ones.
Here is the data dump: https://github.com/aparij/soCareers-Data/tree/master/new
There are a very large number of reasons that the number of jobs on a job site could drop, including but not limited to the fact that people have stopped using the OP's very small job site. (It is also suspicious that the OP compared it to 4 other sites which are unlabeled.)
That's partially why blogs use large job sites, so at least the change is somewhat statistically significant. (It could still be caused by non-economic issues: from the little bit displayed in the charts, seasonality is in play, which makes looking at a Jan-Jun horizon flawed.)
The big tech companies are still hiring but being much more selective and limiting how many they hire. By the time some of them start having layoffs, the tech economy will be in free fall for a number of months and it will be too late if you aren't prepared for it.
Many larger companies are starting to put in place policies and procedures that will limit the number of promotions, raises, and bonuses that they give out. These same actions are also intended to expose lower performers more quickly so they can be identified and managed out.
My advice is this: find ways to continue to learn and grow, and keep building skills and making things. Save for potentially long periods of unemployment, and be willing to work very hard to ensure that you're not perceived as a low performer.
Lastly, don't sweat it. These economic corrections are good thing. They can unlock a lot of latent talent, capital, and resources that are being wasted. Just be financially prepared for it and you'll be fine.
A couple of things that I noticed:
- More channels: there are at least a dozen startups like TripleByte and interviewing.io.
- Referrals have become the preferred way of hiring, from small companies where the majority of hires are made through personal networks to Fortune 500 companies (using an external recruiter is a last resort).
- AngelList and Stack Overflow Careers have matching features, and you can learn more about the company.
- This is an employee's market. Companies have to be proactive to get employees (hence the 'poaching'). Posting something to a job board is not efficient.
- More acquihires of European companies by US companies, since Europe has a hard time competing with Apple, Google, Facebook etc.
The first two points are backed by data.
The chart suggests that he compares AuthenticJobs to similar job boards with ~200 job posts per month. Stack Overflow Careers is in a different league; listings there average 1,800 posts per month.
Also, I wrote a counting script for Hacker News' "Who is Hiring" threads, and the results suggest the opposite as well. Please check this chart: https://blog.whoishiring.io/hacker-news-who-is-hiring-thread...
Disclaimer: I run https://whoishiring.io and I scrape them all.
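For the curious, the counting logic behind such a script is simple. This is a hypothetical sketch (not the author's actual code): each top-level comment on a monthly "Who is Hiring" thread is one job posting, so counting the story's direct children approximates the number of ads. The dict shape mirrors the public Algolia HN items API (hn.algolia.com/api/v1/items/<id>); fetching is left out so the counting logic stands alone.

```python
def count_top_level(item):
    """Count non-deleted top-level comments of a story item."""
    return sum(
        1
        for child in item.get("children", [])
        if child.get("text")  # deleted/dead comments come back without text
    )

# Synthetic thread standing in for a real "Who is Hiring" story:
thread = {
    "title": "Ask HN: Who is hiring? (July 2016)",
    "children": [
        {"text": "Acme Corp | SF | Onsite", "children": [{"text": "applied!"}]},
        {"text": None, "children": []},  # deleted posting
        {"text": "Globex | Remote", "children": []},
    ],
}

print(count_top_level(thread))  # → 2
```

Running this over each month's thread and plotting the counts gives the trend line in question.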
Private opinion: statements like his are obviously trying to whip up a shitstorm. There is no bubble where I stand.
I now have a list of ~300 recruiters' emails that I can give out to anyone who is looking for work :)
Hiring is down across the board, and it is indeed due to poor economic headwinds. Many finance heads are starting to think that all our bailouts in 2007/2008 did was create an asset bubble down the road in equities by providing access to cheap capital, encouraging borrowing for risky investments that are just now starting to sour.
Add to this the slowdown in China, Brexit, Donald Trump, and a volatile oil market and there's just too much uncertainty for companies to staff up right now. Some firms are even quietly issuing preemptive layoffs under the assumption that 2017 is going to be a very bad year a la 2007.
That said, there's still plenty of work for tech workers (someone has to implement those cost savings projects, amirite?). But look for middle managers, marketing folks and sales people to have a bad time as companies look to trim payrolls.
Anecdotally, I would say it's also likely that tech companies that ARE hiring are shifting their hiring budget to sources that bring in more qualified candidates. AuthenticJobs is one of the better niche tech job boards, but it's still a job board. We see lots of growing companies that are spending less money on older channels like LinkedIn and job boards and instead, spending money on an applicant tracking system, hiring in-house recruiters, and using services like Entelo, TheMuse, AngelList, Hired, etc.
*We run Underdog.io.
Many companies, SMBs in particular, are discovering they don't need to "own the cow to drink the milk".
It's simpler to post tasks to sites like Fiverr than it is to post a "job description" and hire an employee/contractor.
Correct, they are occurring outside of tech. In fact you don't need to guess; it says it right there in the report if you read it:
"In June, job growth occurred in leisure and hospitality, health care and social assistance, and financial activities."
The largest increase last month was in "Leisure and hospitality," which added 59,000 jobs. This is basically the hotel industry gearing up for summer, which is when most Americans travel for vacation. These jobs don't produce any real economic growth. The economy is not recovering. Today, what the media failed to mention is that the unemployment rate actually went up 0.2% to 4.9% (although the more accurate number to look at is the U-6, which went down from 9.7% to 9.6%). Also, Average Hourly Earnings m/m went down from 0.2% to 0.1%.
If this had been the usual analysis of one of the large job listing aggregators, then I think more of the arguments made here would come into play, but I give this one slightly more credibility since it's a for-pay and "curated" job site.
So I'll put it in the somewhat-more-interesting category.
Back to the author's points, of course perceptions help frame reality, too.
When I landed a job (a good one) it was through a posting on the Who's Hiring thread here on HN, and subsequent email/phone conversations and co-working time.
Every major election year the job market does this. It's worse when it's an 8 year cycle.
Add to it the uncertainty around Trump and Brexit causing economic worries... basically there is less risk appetite, so less VC capital, so fewer startups.
It's a normal cycle.
1) There has been some battening down of the hatches.
2) There is migration among boards. (Example: Indeed has become much more relevant than during my last job search. As a hiring manager I need to pay more attention to it.)
With investors focusing more on hard fundamentals like revenue and profit and a lot less on "hype" metrics like user growth, employee growth, number of baristas on staff and such a lot of tech companies have taken the hint and got down to business. That includes slamming the brakes on hiring and in some cases cutting staff. That's not the only force at play, but it's a big one.
What you need is a resumé database you can search.
In fact, I would personally never, ever respond to a job advert. Thousands of people who do not have the skill set whatsoever will respond too. That gives a completely wrong impression of levels of competition that do not exist in reality, and drives down compensation for real candidates, who therefore bail out pretty much immediately, or never respond at all.
Seriously, there is often a good reason why these other people are looking for jobs. It is always the same people looking for jobs. The people whom you really want to hire are usually not looking for jobs.
You should reasonably assume that people who can really do the work are already working. The discussion then revolves around why it would be more interesting to work for your company, and how much more you are willing to pay for that.
Wise up and stop wasting your time and money on job adverts.
What they really need to do is factor in the probability of encounters with police given race. That is, we want to know: out of all encounters with police, are they more likely to be fatal if the victim is black?
Without factoring in the rate of police encounters, the conclusion could just be indicating that black people are more likely to encounter the police, which is a problem in itself but is slightly different. That would point more to socioeconomic factors determining the rate of policing in neighborhoods, rather than police racism.
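A toy calculation with invented numbers makes the distinction concrete: give both groups an identical probability of being shot per encounter, and differing encounter rates alone still produce very different per-capita shooting rates.

```python
# Toy numbers, invented purely for illustration: both groups face the exact
# same probability of being shot per police encounter, but group A is
# stopped five times as often as group B.

pop = {"A": 1_000_000, "B": 1_000_000}
encounter_rate = {"A": 0.10, "B": 0.02}   # encounters per person per year
p_shot_given_encounter = 0.0001           # identical for both groups

rates = {}
for g in pop:
    encounters = pop[g] * encounter_rate[g]
    shootings = encounters * p_shot_given_encounter
    rates[g] = round(shootings / pop[g] * 1_000_000, 2)  # per million people

print(rates)  # → {'A': 10.0, 'B': 2.0}

# Per-capita rates differ 5x even though, conditional on an encounter,
# the groups are treated identically: per-capita counts alone cannot
# distinguish biased use of force from biased rates of contact.
```

Which is exactly why conditioning on encounters answers a different question than per-capita counts do.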
But let's not make the mistake of looking at this study only in isolation. It is a recent addition to a large collection of observations and evidence that support a theory that personal racial bias affects American policing.
The evidence includes other studies, criminal investigations, criminal cases, federal investigations and reform agreements with police departments like Cleveland and Seattle, videos and photos of violent police encounters, and of course decades of stories and statements from minority communities about how the police treat them.
The last one is important because it gets at trust, which is the heart of the issue. Minority communities, many of them, do not trust the police to protect them in the same way they protect whiter/richer communities, and they have stories that explain why not.
If you are depending solely on data-driven studies to inform your opinion on racial bias in policing, then you're implicitly saying that you distrust or reject what minority people and communities say. Why is that? It's worth thinking about IMO.
Which brings us back to the data. Why is it so lacking? You can't answer that question without coming back around to bias, because until recently, it was the police forces themselves who supplied the data, or not, or only part of the data. So discounting the bias reported in this study because of data problems is getting toward begging the question, logically speaking.
The essential question, when it comes to whether you agree that racial bias affects policing, is: what level of evidence will convince you?
Edit: the author acknowledges the incompleteness of the data in the conclusions section. Oddly, he doesn't think that's likely to affect the mean "since the sample used herein is a large and random subset of the to-be-completed data set". That doesn't really make sense to me. How would random sampling of incomplete data improve the results?
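A quick simulation on synthetic data illustrates the statistical claim at issue: a mean computed from a *random* subset of the full data is an unbiased estimate of the full-data mean, while non-random missingness biases it. The author's argument hinges entirely on the missingness being random.

```python
# Synthetic data, just to illustrate: random subsetting preserves the mean;
# non-random missingness (say, large values going unreported) does not.
import random

random.seed(0)
full = [random.gauss(10, 3) for _ in range(100_000)]
full_mean = sum(full) / len(full)

# Random 50% subset: its mean stays very close to the full-data mean.
random_subset = random.sample(full, 50_000)
random_mean = sum(random_subset) / len(random_subset)

# Non-random missingness: suppose records with large values went unreported.
biased_subset = [x for x in full if x < 10]
biased_mean = sum(biased_subset) / len(biased_subset)

print(abs(random_mean - full_mean) < 0.1)   # → True
print(abs(biased_mean - full_mean) > 1.0)   # → True
```

So random sampling doesn't "improve" the results; it just fails to make them worse. The real question is whether the incomplete records are missing at random, which is exactly what one might doubt here.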
With the disclaimer that the conclusion might still be correct, I think looking at the county level is completely absurd. You leave yourself open to the Simpson's Paradox at a neighborhood level.
For argument's sake, let's say that the majority of police shootings happen in poor neighborhoods. Let's also assume, sadly, that the ratio of black/white people in poor neighborhoods is high.
Their analysis would imply that there is a racial bias to the shootings when, in fact, the racial bias could be entirely explained by the demographics. Or it might not be, but doing it at a county level completely washes out all useful signal.
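A toy example with invented numbers shows how this aggregation effect can work: within each neighborhood the per-person shooting rate is identical for black and white residents, yet the county-level aggregate shows a ~3x black/white ratio, purely because the groups are distributed differently across neighborhoods with different policing intensity.

```python
# Invented numbers: no within-neighborhood bias, yet a large county-level
# rate ratio emerges from demographics alone.

# (population, shootings per person) by neighborhood and race
poor = {"black": (8_000, 0.0020), "white": (2_000, 0.0020)}
rich = {"black": (2_000, 0.0002), "white": (8_000, 0.0002)}

def county_rate(race):
    population = poor[race][0] + rich[race][0]
    shootings = poor[race][0] * poor[race][1] + rich[race][0] * rich[race][1]
    return shootings / population

ratio = county_rate("black") / county_rate("white")
print(round(ratio, 2))  # → 2.93
```

A county-level regression sees only the 2.93x ratio; the neighborhood-level data, which show no disparity at all, are invisible to it.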
In my mind, I tend to assume that criminals, active or former, are more likely to be shot at than non-criminals, whether or not they are armed. I'd really like to see the data normalized against prior convictions or in-process-of-a-crime stats; that would help me understand:
1. Is the effect magnified or dampened by some sort of differences in black and white criminality in these areas?
2. Are these shootings happening mostly while people are in the commission of crimes, or are they, a la Minnesota this week, something that appears to be just wholesale adrenaline-based killings by police officers?
Factors which did not affect police shootings:
- Local-level crime rates
- Race-specific crime rates
Crazy takeaways:
- A black unarmed individual is 3.49x more likely to be shot than a white unarmed individual, on average across America.
- Some counties show a 20x higher likelihood.
I'm interested to see this data in relationship to gun accessibility and gun ownership stats. Would less access to firearms affect police shootings? Is there a racial connection to gun ownership and carrying?
I'm not american and the idea of civilians with guns seems just so crazy to me.
The question we should be concerned with should be: 'How is policing/governance structured in a way that enables or encourages people to act upon their biases to detrimental results?'
The distinction is important because eliminating bias/whateverism will never happen, but making it possible for the justice system to operate fairly given the biases of its constituent members should be a desired outcome.
Am I missing something from their methodology?
Across almost all counties, individuals who were armed and shot by police had a much higher probability of being black or hispanic than being white. Likewise, across almost all counties, individuals who were unarmed and shot by police had a much higher probability of being black or hispanic than being white. Tragically, across a large proportion of counties, individuals who were shot by police had a higher median probability of being unarmed black individuals than being armed white individuals. While this pattern could be explained by reduced levels of crime being committed by armed white individuals, it still raises a question as to why there exists such a high rate of police shooting of unarmed black individuals.
1. Whether suspect is unarmed is only known for certain after the incident.
2. Differences in levels of crimes by race.
Police should not be using deadly force unless necessary; 41:1000 seems like some pretty biased reactions on the side of the police.
The reason different races are shot at different rates could be based on anything, including racism, likelihood to commit an offense, which race is more likely to have mental illness, or whether it's more difficult to identify facial structure. Blaming race outright is kind of silly; it's trying to simplify a multi-dimensional problem that needs all of its dimensions to reach a conclusion.
Recent MN incident is definitely horrible and seems unequivocally wrong, no questions about it. It's very sad. It's worse than Garner and totally different scale than other high-profile cases of late. It's the sort of incident that definitely supports claims made by BLM (as previous ones have not, at least to me).
The clear racial implications prompted me to look at the overall country-wide figures to see if the actual stats for last few years reflect the narrative promulgated in the news (black people are disproportionately killed by the police).
1. "... roughly 49 percent of those killed by officers from May 2013 to April 2015 were white, while 30 percent were black. He also found that 19 percent were Hispanic." (http://www.washingtontimes.com//police-kill-more-whites-t/)
2. "There were 511 officers killed in felonious incidents and 540 offenders from 2004 to 2013, according to FBI reports. Among the total offenders, 52 percent were white, and 43 percent were black."
The ratio of african-americans among cop killers (43% of all incidents) to those who are killed by cops (30% of all incidents) is 1.43 - which does not bear out the claim that they are unfairly targeted as a group (of course this doesn't absolve the individual cops who wrongfully kill innocent people).
The one obvious problem with this analysis is that the set of people who kill cops vs. those who are killed by them are non-overlapping, but when country-wide stats over a few years are considered, the numbers would sort themselves out (i.e. a random white guy killed by mistake simply doesn't make the national news).
I'm sorry if this analysis is unpleasant and welcome criticism of why it could be wrong.
1. "In contrast to previous work that relied on the FBI's Supplemental Homicide Reports that were constructed from self-reported cases of police-involved homicide, this data set is less likely to be biased by police reporting practices."
I'm very interested to see if the full article (which is timing out so I can't check) goes into detail on what the reporting practices are, how they are biased, and how this data set solves those biases.
2. I'm curious how this data compares to the Guardian's study: http://www.theguardian.com/us-news/ng-interactive/2015/jun/0...
In particular, this database shows that in 2015, while the "per million" count of blacks killed was 7.27 (and for whites was 2.93), the "in total" count was 306 for blacks and 581 for whites. The statistics in 2016 so far are not much better: 3.23/1.41 respectively per million, and 136/279 in total. Ideally this number would be 0 for all counts, but we don't live in that world.
3. This report was published at the end of 2015, and unfortunately we have seen a massive spike in killings since then. Further, their dataset (according to the title) is only from 2011 to 2014. Is anyone working on a follow-up study using more recent data?
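As a quick arithmetic cross-check of the Guardian figures in point 2 (a sketch: the totals and per-million rates are taken from the comment above, and the implied population is just total divided by rate):

```python
# Cross-checking the Guardian's 2015 numbers: the totals and per-million
# rates should imply plausible population bases if they're internally
# consistent. Figures are from the comment above.
def implied_population(total_killed, rate_per_million):
    return total_killed / rate_per_million * 1_000_000

black_pop = implied_population(306, 7.27)   # ~42 million
white_pop = implied_population(581, 2.93)   # ~198 million
print(round(black_pop / 1e6, 1), round(white_pop / 1e6, 1))  # 42.1 198.3
```

Those implied bases are roughly in line with US Census population figures, so the totals and the per-million rates are at least consistent with each other.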
Different ethnic groups commit crimes at different rates - even if there is bias in apprehension/monitoring.
The question they should try to answer is:
'Among interactions with police, how much more or less likely are people to be shot'.
This way, we can ignore the possibility that cops are unfairly focusing more on blacks, and isolate and at least assess how much more likely someone is to die.
It seems that academia even has a problem with trying to get at the truth and heart of the matter.
This article is attracting some distasteful comments, generally seems to be headed in a bad direction, and I want no association with it.
I am no statistics expert, so just curious, anyone here knowledgeable enough to read into the technical details and comment on how good the study quality seems to be?
> Ecological regression on county-level characteristics is plagued by difficulties theoretically [39, 51]; issues with data quality make it even harder to use county-level data. In the analysis of county-level predictors of racial bias in police shootings conducted in this paper, some of the data were low quality. Notably, the crime data may be biased by the reporting practices of the police, and Florida, Alabama, and Illinois failed to fully release data, which led to the use of Bayesian imputation for counties in these states.
I'd like to see BLM take this on as a demand during their next direct action: better data for Cody Ross @ the Department of Anthropology, University of California, Davis
Number of gun deaths: ~13,000
Number of police officers killed: ~50
Number of civilians killed by police: ~1200
Number of white civilians killed: ~500
Number of black civilians killed: ~330
Percentage of blacks in the population: ~13
We have a large death-by-gun problem in the US as compared to the rest of the OECD.
How much does formal verification matter/increase confidence?
Is the rust version easier to integrate with other stacks?
Everything you Never Wanted to Know about PKI but were Forced to Find Out (https://www.cs.auckland.ac.nz/~pgut001/pubs/pkitutorial.pdf)
Writing a cryptography library from scratch because the old one has too many security holes? Your implementation is likely to have even more.
We should really concentrate on making one implementation secure, instead of creating more libraries.
A door that only lets cats in, based on image recognition.
As an aside, I've really enjoyed the particle photon so far. It was a little wonky at first when they didn't have persistent storage of state changes, but now that's up and running, it's flawless. It runs the lights in my house (via a relay just like OP) and has recovered from a few power outages with no attention necessary from me.
But instead of cats, I want it to detect Fedex and UPS delivery drivers. And instead of turning on the sprinklers, I want it to ring my doorbell so that I know there's a package sitting on my front porch.
I bet people would pay real money for a system like this.
They worked reasonably well, but the cats have learned that if they run quickly past one, it won't hiss. So now I'm thinking of modifying them so they use an IR beam, and a beam interruption would trigger the hiss.
The eternal battle goes on.
edit: The title of the submission has been changed; originally, it was something like "Using Deep Learning to Keep Cats off the Lawn"
1) A lot of scheduling friction will disappear. The week just after demo day is usually crazy because hundreds of founders and investors are all trying to schedule meetings with each other, and there are inevitable race conditions that lead to a lot of rescheduling and wasted time. (E.g. I email three founders with 5 possible time slots, and they all reply and ask for the same time slot.)
2) I think this will be great for investors who act quickly and go by their gut. There are plenty of investors out there -- especially those who write smaller checks -- for whom 20 minutes will be enough time to make a quick decision.
3) I'm not sure if this will be great for investors like me who approach investing more methodically rather than with their gut.* I love 1-hour meetings because that's plenty of time for both sides to dig in and learn a lot about each other. Twenty minutes feels very short to me, and I'm not sure if a 20-minute meeting is more likely to save me and the founder from an unnecessary 1-hour meeting, or if I'll end up having just as many 1-hour meetings -- but now with an extra 20 minutes tacked on.
That said, I don't want to judge this process before I try it at least once, and I'm looking forward to trying it out in August.
* FWIW, there are great gut-based investors and great methodical investors, and I'm not implying either approach is better.
This seems to be an attempt to reinforce FOMO to force key people to attend demo day. Resorting to these tactics implies a pretty big perceived power imbalance... E.g., is this a reaction to senior partners sending their underlings because of DDay burnout? If so, this could backfire. The senior partners might just not show up (still), and force founders to attend offsite meetings later anyway (in addition to the new DDay meetings). In other words: this might just be a net increase in founder effort without changing the investor-founder DDay dynamic. After all, what do the senior partners gain or lose in this new situation? Not much (aside from FOMO), I'd guess. But the best investors will always be in demand anyway -- whether they attend DDay or not.
I can imagine many firms go into demo day with the resources and willingness to fund 10-20 ventures, whereas others are looking for only one or two.
I would much rather be the fifth choice of the former than third choice of the latter. If I knew my position in the stack I could figure it out for myself, or it could be calculated for me if YC is trying to avoid the negative consequences of making that transparent.
Investors would love it because they now have visual confirmation of who all are the "hot" matches. I bet everyone other than the Sequoias would be straining to look at who the hot startups are...and completely ignore the ones in front of them.
For the long tail of a YC batch - the ones that are not hotly contested - this could be a disaster. Previously, investors would be forced to actually look at a startup and decide in isolation. Now they can simply look at the Big VC.
I can completely see why investors would love this.
One might as well make public the interest match list and rank it by order of "likes" received.
People are perfectly capable of driving up and down the Bay Area. Guess what - we get a few free meals and coffees out of it.
EDIT: guess what, you can bring associates to tailgate Sequoia & A16Z investors.
EDIT2: what you guys might be trying to do is be helpful. For example, this lets you force-schedule investor meetings for startups that had no investor interest and term it as "the AI did it!". But I'm not sure if that will be really helpful ... at the cost of drastically reducing FOMO factor for most other startups.
1) can investors see that a startup is not at the venue or at a table all morning and get a feel for demand?
2) can investors gather any information from the matching results to get a feel for demand?
3) is there a chance for investors to send false signals by artificially showing interest in other startups? Vice versa for startups (would require collusion, so unlikely)
2 and 3 not so much, but regarding number one: will physical observation of the meeting space introduce any signaling opportunities, be they genuine or fraudulent, for either party?
If the algorithm YC uses to produce the matching for each slot is based on these preferences, which algorithm is it? One that gives priority to the women's choices or to the men's? A variation that doesn't give priority to either side's choices?
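For reference, the "priority" question above is exactly the proposer-advantage issue in stable matching (Gale-Shapley / deferred acceptance): whichever side proposes gets its best stable outcome. A minimal sketch (all names and preference lists here are made up):

```python
# Deferred acceptance (Gale-Shapley): the proposing side gets its best
# stable match, so "who proposes" changes the outcome.
def gale_shapley(proposer_prefs, reviewer_prefs):
    # rank[r][p]: how highly reviewer r ranks proposer p (lower = better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)            # unmatched proposers
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                             # reviewer -> proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]  # p's best not-yet-tried reviewer
        next_choice[p] += 1
        if r not in match:
            match[r] = p                   # reviewer was free: accept
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])          # reviewer trades up
            match[r] = p
        else:
            free.append(p)                 # rejected; p tries again later
    return match

# Two stable matchings exist here; each side gets its first choice
# exactly when it is the proposing side.
startups  = {"S1": ["A", "B"], "S2": ["B", "A"]}
investors = {"A": ["S2", "S1"], "B": ["S1", "S2"]}
print(gale_shapley(startups, investors))   # startups propose: each startup gets its favorite
print(gale_shapley(investors, startups))   # investors propose: each investor gets its favorite
```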
Sounds like the backend of a dating site for startups and investors.
A study on speed dating showed an increased opinion from the person that approaches the other. Intentional?
Did you even win the Putnam, if not then please don't be bolder than the parent poster.
I feel reckless writing C/C++ code if I can't test it with Memcheck.
I recently had to debug a memory usage issue at work, and SystemTap seemed like it would be a lifesaver in those and many other occasions. Unfortunately, both myself and my coworkers were inexperienced with it and there wasn't much documentation about it online, so we ended up using a standard profiler to track down the issue, which turned out to be a much slower process.
It gets you something closer to what the program looks like at the source code level: You get calls to printf (well, __printf_chk on a modern Linux) instead of write, for example. The downside is that ltrace doesn't (and can't) know as much about every single function in every single dynamic library, so, while the names are there, the arguments are typically less convenient to work with and may be incorrect. (For example, it doesn't dereference pointers to print out nice strings, and it might not know how many arguments a function takes.)
Personally I also like to extract data from logs with UNIX tools and feed them into csv/tsv files, then process them with R.
33.2%, thanks to Bayes' theorem.
If you have a test with 99% accuracy and 1 in 200 people use the drug, then someone testing positive is, more likely than not, not using the drug. If the drug is less popular, that percentage drops further at a fast rate.
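A minimal sketch of that calculation (taking "99% accuracy" to mean both sensitivity and specificity are 99%, and prevalence as 1 in 200):

```python
# Posterior probability that a positive drug test indicates actual use,
# via Bayes' theorem. "Accuracy" is assumed to mean both the true-positive
# and true-negative rates are 99%.
def p_user_given_positive(accuracy, prevalence):
    true_pos = accuracy * prevalence              # users who test positive
    false_pos = (1 - accuracy) * (1 - prevalence) # non-users who test positive
    return true_pos / (true_pos + false_pos)

print(round(p_user_given_positive(0.99, 1 / 200), 3))   # 0.332
print(round(p_user_given_positive(0.99, 1 / 2000), 3))  # 0.047 -- rarer drug, lower posterior
```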
If I kill someone there's probably something wrong with me, and it's in everyone's best interests to repair me. If I know that someone I care about is going to kill someone, I should be able to turn them over to the authorities without having to worry that they will almost certainly be abused and returned in worse shape.
Having the term "fuck you in the ass prison" in the common vocabulary and used semi-humorously or as a reference to the normal functioning of the criminal justice system is exceedingly troubling.
This. We place too much trust in confessions.
How something like this isn't happening long before a plea bargain or trial enters the picture is just beyond me.
What's the solution here?
This is an embarrassing failure of the justice system and all involved. Why is it so acceptable for them to do such a bad job?
> Albritton was charged with felony drug possession
WTF guys, seriously? Possession of a crumb on a floor?
It's not a drug test that's faulty here.
> Albritton was escorted to a dark wood-paneled courtroom. A guilty plea requires the defendant to make a series of statements that serve as a confession and to waive multiple constitutional rights. The judge, Vanessa Velasquez, walked her through the recitation, Albritton recalls, but never asked why she couldn't stop crying long enough to speak in sentences. She had managed to say the one word that mattered: guilty.
This is called a "plea colloquy." An example is here: http://www.vawd.uscourts.gov/media/1966/guiltypleacolloquy.p.... As you can see, it's a conversation in which the judge must assure herself that the defendant understands the charges against her, understands the consequences, and that the prosecution has facts it is prepared to prove at trial (see question 31).
At least in federal courts, these are very detailed and take quite some time. I don't know how things are done in Harris County, but an insufficient plea colloquy is grounds for having the guilty plea set aside later, so judges have strong incentives to ensure their colloquies are adequate.
So not only is it true that "guilty" is the "only word that mattered," the article's implication that the judge is only interested in hearing the word "guilty" is totally at odds with not only the judge's incentives, but the prosecutor's. What "tough on crime" prosecutor looking to rack up convictions would leave the door open for a conviction to be vacated just because the defendant couldn't get through the colloquy?
This could not be less true in today's America. I encourage everyone here to read The New Jim Crow (and books like it). Then you'll see how deep the mass incarceration problem is.
My sense is that if you are going to go to the Bay area, and put up with those high high costs for everything, you need to make it pay. Spend your weekends doing unusual stuff that you can't do elsewhere. Take jobs that don't exist elsewhere. And if that doesn't sound like your thing, don't go there in the first place.
Then you look at the swaths of tech workers who were there at the early stage. I've seen multiple large exits in startups. Places I've worked at or have had close friends. People who were there at the seed stage with barely a MVP, who built products, pulled all-nighters, and worked 7 days a week. Upon acquisition, the VCs made 10s of millions, the founders made 10s of millions and the employees got new car money (for 2 years of intense work getting paid well below market).
EDIT: Another interesting point that is a bit too off-topic and lengthy for this comment is that most startups simply don't realize the value of (5+)x engineers. I've worked with people who are astronomically more efficient than entry- or mid-level engineers who make 100-130K, yet they make around the same amount. It's well worth it to find these people, pay them 2-3x market, and retain them. I'm not talking about algorithms wizards (although these people tend to be decent algorists too); I'm speaking of engineers who can work on all levels of the stack and ship code that is simple and works. They understand business needs and don't get caught up in self-gratifying projects. They use a mix of new and old tools, selecting them for reliability and efficiency.
> "There is more opportunity for tech professionals in more places than ever before," wrote Terence Chiu, vice president of Indeed Prime, by email, citing cities such as Austin, Boston, Seattle, and New York City.
Seriously, in the age of the Internet, and of looong traffic jams on 101 and on the various Bay bridges, if an employer insists every programmer has to hang around in that area something is off in their thinking. As someone who did hang around there (during the dot com boom), at several companies and visiting many more as part of the job, it is overrated, especially for programmers. Sure it's better to be around the coworkers everything else being equal - but everything else is not equal. The costs of doing so (not just monetary) is very high.
It's a beautiful area alright, I lived in the Presidio at the end (that's the huge park right next to the Golden Gate bridge), perfect. But not all people can live in the same place... (PS: By the way, the East Bay has great places too! The Oakland hills near the top, for tens of miles, have some of the most beautifully located properties in the entire Bay Area. Plus endless parks and trails and horse riding, etc., not to mention the incredible views. And in Oakland visit Jack London Square and then walk downtown.)
Seems like Uber and Snapchat were the last game-changing companies for how people interact with the world via tech on a large scale. IMO it's a lack of media attention, excitement, and companies that are obvious lifestyle-changers.
Indeed, Indeed seems to attract the kind of engineers most likely to give the kinds of responses seen in their survey, like preferring large established brands over startups.
This is less likely (in my opinion) to indicate a shift in engineer preferences than it is to reveal the skew of Indeed's userbase.
Disclaimer: I work at Hired, a direct competitor to Indeed Prime, and live/work in SF. (Also note that although our customers often use competitors, they typically cite Indeed Prime as less effective for startup talent than much smaller competitors).
As to the startup thing: the tech bubble hasn't burst yet. But valuations of startups, big or small, are already taking hits, which means a large chunk of employees' potential salary has evaporated. I would predict that startups are going to have a hard time attracting the best talent out there; on the other hand, they might rein in their appetite for expansion since growth is slowing.
I mean what sane person thinks any of that is good?
Maybe it's because I have a good job that affords me some freedom, but as a married man soon welcoming a baby, Silicon Valley holds no appeal for me - but then again, tech isn't my life, it's only a portion of it.
Factors in that statement include cost of living and commute times - both low for me in the Utah Valley.
It probably really does matter that I hate commuting and I'd like a yard.
Try scaling to 100+ engineers in Pittsburgh or Chicago. Try meeting VCs daily in between Series B and C. It's harder everywhere else than in Silicon Valley.
The area also has an energy that comes from high energy educated transplants that is hard to replicate. I've seen it in New York but not as much elsewhere in the US.
Clearly the Bay Area, whether you like it or not, is a phenomenal place which has stayed at the top of the tech pyramid since its inception as "Silicon Valley". It's expensive, people work crazy hours, tech bubble, housing bubble, traffic, bad infrastructure, etc. But there's a reason why most major technological developments happen here and not elsewhere.
I'm not advocating that this is a place everyone would enjoy living, but bashing it for its negatives while completely ignoring the obvious positives is, IMO, disingenuous.
The low interest in working for startups is also telling, IMO. Could that be because there aren't a lot of exciting startups currently, or is it that developers are looking for more stability?
When you're younger, you switch jobs a lot more often to figure out what you want to do and to try something new. I wouldn't be surprised if that extended into moving more often. E.g. I want to try city living, then suburb living, etc.
Funny thing about the Bay Area, though, is that it can get hard to move around because if you've lived here for more than a couple years, you are now renting or owning a place so cheap (in comparison to market prices) that you can't justify moving.
More seriously, I've half-thought about moving to the West Coast from the East Coast a number of times. For jobs, the Bay Area takes the cake... but for remote work, there are other places along the coast that aren't cities and are delightfully pleasant -- nobody ever seems to talk about Monterey, for example.
b) The idea that "tech" is the pinnacle of social status and the industry to disdain over inequality is laughable at best, even if it is true in that area. Being "in tech" flies so far under the radar in New York City and people in that industry earn the same amounts as in SF.
c) The infrastructure in Silicon Valley is laughable. Many startups trying to "change the world" look at the world through a lens of what doesn't function well in Silicon Valley, when the rest of the world, or the addressable market, already has an adequate solution to a problem the Stanford grad and their Sand Hill neighbors think exists. (Random examples: many gas stations on the peninsula don't take credit cards; cellular internet speeds on Verizon, T-Mobile, AT&T, and Google Fi are pretty slow; startups are still working on clever WiFi solutions, not realizing that the rest of the country has cellular data faster than Silicon Valley's WiFi.)
If you live here and you're anywhere in this thread complaining about how it's overrated and you hate having to be here for work, please, get out. Now. My friends and neighbors are struggling to be in this place for lots of reasons you're oblivious to, precisely because there are so many people who actually hate it here but feel they have to be here for work.
You're not doing yourself or anyone else any favors by staying here. Move. Please.
'About half of millennial tech workers say it's important (26.5%)'
The dataset used, for anyone curious.
For me, I need the change of temperature (sure it's between about 8 and 21-25C, but a variation). The lack of light in winter affects me more.
I've been through one Finnish winter and, on one work trip, had visits to Tallinn (-30C), Stockholm (-12C), and Munich (0C), where 0C really felt like I should be wearing shorts.
What I can't deal with is hot and humid. I arrived in Germany about two weeks back; it was 33C and humid. No number of showers would help with feeling "sweaty and dirty".
I think some variation is useful. I also miss rain.
It starts in Brownsville, Tx.
In January 2016, there were 2 days in Brownsville where the high was 70: the 13th and the 28th.
Even if you say +/- 5 degrees is 'close enough' to 70, fewer than half the days in January 2016 fall into that range.
From the 3rd to the 4th, the high temperature jumped 15 degrees from 51 to 66.
On the 2nd, the high was 45 degrees. The 15th, the high was 83 degrees. That's a 38 degree high temperature spread in one month.
So if you really do need 70, spot-on to be happy, you may need to add standard deviation to your criteria.
Here's the chart:
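The standard-deviation idea can be sketched with the handful of January highs quoted above, treated as a stand-in sample (not the full month's data):

```python
# Rough sketch: quantifying "how close to 70" Brownsville's January highs
# were, using only the values quoted in the comment above as a sample.
import statistics

highs = [45, 51, 66, 70, 70, 83]  # quoted January 2016 daily highs
mean = statistics.mean(highs)
spread = statistics.stdev(highs)
print(round(mean, 1), round(spread, 1))  # 64.2 13.9
```

Even this small sample suggests a mean well below 70 with a spread of nearly 14 degrees, so "70, spot-on" is a rare day.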
Their summary of pros/cons was you may become bored.
Lambda is similar, we have 'Serverless' and I'm hacking on Apex (https://github.com/apex/apex) just to make it usable. I get that they want to create building blocks, but at the end of the day consumers just want something that works, you can still have building blocks AND provide real workable solutions.
I was part of the team migrating Segment's infra to ECS, and for us at least it went pretty well; some issues with agents disconnecting, etc., I sort of wrote off since ECS was so new at the time.
Another annoying thing not mentioned in the article is that the default AMI used for ECS is not at all production ready, you really have to bake your own images if you want something usable. I suppose this is maybe because there's subjectively no "good" defaults, I'm not sure, but it's a bit of a pain.
ELB for service discovery is fine if you can afford it, I had no issues with that, ELB + DNS keeps things very simple. I'm not a huge fan of all these complex discovery mechanisms, in most cases I think they're completely unnecessary unless you're just looking to complicate your infrastructure.
I also think that in many cases not propagating global config (env) changes is a good thing, depending on your taste. Scoping to the container gives you nice isolation and more flexibility if you need to redirect a few containers to a new database, for example. You don't have to ask yourself "shit, which containers use this?". It's much like using flags in the CLI: if we _all_ used environment variables in place of every flag, it would be a complete mess.
EDIT: I forgot to mention that the ELB zero-downtime stuff was awesome, if you try and re-invent that with haproxy etc, then... that's unfortunate haha. No one should have to implement such a critical thing.
I understand these challenges. I wrote about a lot of them here:
But we have been having tons of success on ECS both for our own stuff and for hundreds of users.
I see the agent disconnection problem too. Convox automatically marks those as unhealthy and the ASG replaces them.
It's happening more than I'd like but I'm seeing little to no service disruption. One of the root causes is the docker daemon hanging.
Glad Kubernetes is working well for you. Many roads lead to success as the cloud matures.
I believe this is the recommended way:
ECS container instances automatically get assigned an IAM role, with credentials accessible via instance metadata (169.254.169.254). Containers can access that metadata too. The AWS SDK automatically checks that metadata and configures itself with those credentials, so all you have to do is give your IAM role access to a private S3 bucket with configuration data and load that configuration when booting up your app.
That way there's no need to copy/paste variables, and no leaking secrets in ENV variables. You do have to be careful though (as with any EC2 instance) not to allow outside access to that instance metadata endpoint, e.g., in a service that proxies requests to user-defined hosts on the network (but if you're doing that, you've got a lot more to worry about anyway).
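As a sketch of that pattern (the bucket name, key, and paths below are hypothetical), a container entrypoint can pull its configuration via the instance role's credentials, keeping secrets out of ENV variables and the task definition:

```shell
#!/bin/sh
# Hypothetical container entrypoint. The AWS CLI resolves credentials
# from the instance-metadata endpoint automatically, so no secrets
# appear in the task definition or the environment it declares.
set -e
aws s3 cp s3://my-app-config/production.env /tmp/app.env  # hypothetical bucket/key
set -a; . /tmp/app.env; set +a    # export the downloaded variables
exec "$@"                         # hand off to the real service process
```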
Here is how we get around the issues mentioned in the article:
* Service discovery: built our own with RabbitMQ (we used that before ECS anyway).
* Configs: pass an S3 tarball URL as an environment variable, and download it in the containers.
* CLI: built our own with the help of CloudFormation.
* Agent disconnecting: we did not see a situation where all agents disconnected. We use a large pool of instances, so there was never an issue starting containers because of agents.
In addition to these, we also do the following to make ECS work as we want it to:
* built our own blue-green deploy solution (the structure provided by ECS is very limited)
* built our own solution to integrate with ELB (ELB allows only one port per ELB)
A remaining issue is that you cannot spawn two containers speaking to a given ELB (AWS load balancer) on the same host if they need to bind the same port.
Docker is stuck in the 'one image on one machine' mindset. DCOS is taking over at the higher levels of the stack. Mark my words.
Now if ECS 2.0 was really AWS hosted Kubernetes, I would be very interested in hearing about that...
We ran into similar complaints. CoreOS comes with etcd, which, though initially unstable, is now solid and incredibly handy for service discovery and configuration. We're using https://github.com/MonsantoCo/etcd-aws-cluster to configure it dynamically. We use etcd+confd to drive nginx containers for routing. All in all it works well. Our biggest problems are Docker-bug related, and those we can generally handle by just terminating the node and letting autoscale heal the cluster.
Would packaging the configurations together with the Docker image make more sense? That would enable more hermetic deployment.
Edit: For a more complete analysis of DB's capital position, they have published Moody's report on their credit.  I don't see much in there that would suggest DB was insolvent, but they do seem to be having some difficulty reorganizing their business.
Or the slightly more sensationalised English version:
Comparisons to $UNICORN are irrelevant. If DB melts down, 2008 will look like an inconsequential stutter.
'Worth' as defined by 'what someone is willing to pay'?
Saying Snapchat is 'worth' 17 billion euros is complete and utter nonsense.
Is there general agreement that the US government acted effectively in this matter, not just for bankers, but for society as a whole?
More on the Snapchat side, than the DB side.
In fact, due to liquidation preferences in the term sheets Snapchat's true valuation (i.e. the point at which investors actually lose money) immediately post investment, according to basic math, is $0.
I think it's an important differentiation.
The underlying dynamics of the two companies are quite different, and I'm not sure their "worth" can be directly compared.
One is worth a lot to the granny taking out cash from her pension fund, the other is worth a lot to the advertisers wanting to cash in on the cloud social services.
It's another league. That could cause something ten times the size of Lehman. Too big to fail. If DB falls, the rest of the world follows suit...
Another pearl from the article:
> If the rot isn't stopped soon, Europe will have found a novel solution to the too-big-to-fail problem -- by allowing its banks to shrink until they're too small to be fit for purpose.
Really? and what about the derivatives exposure?
Stop doing that.
Snapchat valuation is nonsense.
Comparing a bank's market cap to Snapchat's? You can't take this seriously. I thought more of Bloomberg.
. Make a list of companies that I would apply to and sort them from most-interesting to no-way-in-hell-am-I-working-here order
. Spend a week reviewing typical algo/data-structure questions
. For the companies that I absolutely want to work for, I review every single Glassdoor review and write down the interview questions. Remember, most companies have question banks and most interviewers have favorite questions, which results in the same questions being asked over and over again. You want to exploit that
. Then, to get over my interviewing jitters, I interview at a few companies where I would absolutely not work. This results in no-pressure interview practice, and you can literally laugh at their asinine interview questions and walk out
. Finally, for the companies I actually want to work at, I try my best to get rid of the phone screen. This is usually accomplished by dazzling them with my decent-sized GitHub profile, contributing some fixes to their OSS project, or finding someone who already works there who is in my alumni network
. Then, when you finally arrive for the interview, you have real-world interview practice, they are already impressed with your GitHub profile/references and biased toward you versus some random joe off the street, and you have made sure you have a pretty high probability of getting a question that you have already seen or one similar to a question you already know.
This technique has helped me get jobs at top-5 employers in the valley along with a few startups. The reason I am posting this here is to demonstrate how broken, unfair, and easy to game this whole process is.
I usually take a few days to brush up on algorithms and structures for the first one I do in a batch, and have some canned answers for the personal questions, but otherwise go in with what I know. Some of that's experience now, but I don't remember any point in my career where I'd have done something this extensive. I hope the poster doesn't feel they need a month's prep every time they want to go test the waters on the job market!
I do agree with the frequent recommendations for Cracking the Coding Interview. As a lead who interviews frequently, my biggest tip is to be honest about what you do and don't know--take what you do know right to the limit then talk about how you'd figure out the rest given normal professional time and resources.
I'm not usually grading someone on whether they can solve my specific problem so much as whether I think they're someone I can work with while they do it. That said, if it's on your resume you'd better be able to talk intelligently about it to whatever level makes sense for your experience. I definitely probe around that stuff to figure out if I can trust the rest.
I think that this whole interview business is bad for both the candidate and the company (which would hire an interview cruncher instead of somebody who can produce work). Of course it is good for everybody else who promotes this kind of thinking (HR people, interview books, interview coaches, recruiters, etc.).
The whole process as described in this article is offensive to me. I don't want to prove myself by answering trick questions to people whose only skill is asking them! It's too bad we have dropped to that point.
(Please forgive my English - not my native language)
The process Amazon and many other tech companies use is fucking terrible. Algorithm tests and surprise CS 101 questions do not identify good employees. They're biased towards recent grads, do not address most real world situations, can be gamed through studying, and do not identify people who can actually think. You should not be testing for people who interview well, you should be testing for people who will make good employees.
You need to test real world scenarios. If they're going to be re-implementing bubble sort and doing binary math then fine, use HackerRank. Otherwise you need to have candidates work on a project based on what they'll actually be doing. Will they be working on APIs? Have them build or integrate with an API. Mapping in an iOS app? Have them do that.
Do not drop big surprises on candidates. Respect them and they will respect you. Tell them what to expect up front. The first email we send includes an outline of our entire hiring process with a list of each step. I've lost track of how many people have thanked me for this. Going through an interview blind is extremely stressful and increases the likelihood of losing good candidates.
My goal is to set people up for success. If their skills match what we're looking for I want them to succeed. I don't want them to fail because they are bad negotiators, didn't have time to study CS questions, or don't fit the typical stereotypes of a programmer.
I can't help but wonder if it was really worth it. Surely his prep helped him get the job, but the entire prep stage he describes seems like a very high up-front cost.
I've always felt that if I can't get through an interview with a little prep and my existing skills, I don't belong.
You should know exactly why you want to work for Amazon.
It would be interesting to know more about his desire to work for Amazon that badly.
I worked as an independent software developer while doing my undergraduate in CS and was very successful just by using fundamentals I learned in courses and then reading and supplementing new things. I then transitioned into a PhD student role in robotics, where I regularly need to come up to speed on topics from all fields of engineering and implement solutions. I love learning and solving problems, but the types of coding interviews discussed seem so far from evaluating that. I produce reliable and novel things I am proud of that take a lot of work, but that would not be enough to pass these interviews.
I know employers need proof of a good fit and competence, but I just dread the day I have to go through one of these things.
Startups and incubators like YC are also competing for the same talent. They are defining a new fitness function to identify people who can find problems in their surroundings and want to solve them with the tools available to them. Startup founders are also writing blogs about how they got funded, my Series A looks better than your Series B, etc. They are unable to leverage social engineering as much as the enterprise crowd. Many graduates still come out of university aiming to get a job at a respectable company rather than doing a startup, mostly because the startup-based social engineering is weak.
They didn't call on the agreed date for the phone interview. Then, when I visited on-site, my future manager, probably the main person I should have been talking to, wasn't there. They quickly assembled an interview schedule in 20 minutes with people who, except maybe two, seemed distracted and just wanted to go do something else. They forgot about me during lunch, so I was left sitting in an office for an hour without anyone checking on me (eventually I got up and started wandering the hallways, hoping someone would stop me and say "Hey, I don't think you belong here" so I could reply with a silly joke about trying to steal AWS's root CA private keys). Then of course they promised to call back with a decision in 2 days, which was more like 2 weeks. I didn't get an offer, no surprise (I did get snarky about the "leadership principles", which they probably didn't appreciate), but it did make a good story I like to tell every time interviews come up.
Knowledge of algorithms and data structures is useful, perhaps essential, in some fields, but definitely not all - or even most. Even then, just understanding the tradeoffs is probably enough.
Unless I'm interviewing someone for something like a database product I am not going to drill them on LSM trees/B+ trees/Fractal trees.
I'm not going to bust a dude's balls over not knowing what a priority heap is if, after I explain it, he can have a decent conversation about where it might be useful.
These sorts of interviews are dumb and I have never accepted a job where I have been subjected to them and probably never will.
I found that nothing else works.
Brilliant. They expect near-flawless execution (under tight time constraints) on these algorithm quizzes -- but can't manage to communicate up front which languages you'll actually be allowed to code in.
I never get this. The first thing that should be understood by the combination of the resume and the phone screen is the candidate's preferred language.
No interview environment will ever simulate "the zone", and there is no way to truly test a programmer out of it.
In dating, people try to find the right fit by looks; in tech interviews, by right answers to arbitrary tech problems.
Neither looks nor answers to those tech problems are predictors of success, yet this is what everybody is still using...
A similar thing happened to me. I was never told or asked which programming language I would prefer. I clicked the link, filled in some info, and landed on a programming question with only two language choices: C or C++. I was like, wait?! I tried to solve the problem anyway but couldn't finish in time. I contacted the recruiter about this and never heard anything back. All I got was an email: sorry, you didn't qualify. I really liked the HackerRank interface, but I was disappointed in the lack of language options I was given.
I'm a total noob in my 3rd year of a CS undergrad.
I've just started prep to crack companies like Amazon.
Any pointers or tips to get started?
Somehow I managed to pass to the next level. I told them no thanks.
I could be wrong, but based on what you are saying here, I'm pretty sure you were being interviewed by a software engineer and not a recruiter.
No graph problems? Seems like a possible hole, I got graph problems in interviews for Amazon.
Edit: Nevermind, I realized this was your phone screen prep.
In fact you could easily argue that European law requires Twitter to disable this access. If I want to delete my online presence, surely being able to actually delete my tweets is as important as being able to delete things from the Google-cache.
If they don't allow using the API for that, use the browser directly.
- Foo tweets
- Bar takes a snapshot of Foo's tweet
- Foo deletes tweet
- Bar displays Foo's deleted tweet on own website
Twitter tells Bar to shut up.
(Twitter would, however, continue to store the deleted tweet. It wouldn't display it, though.)
To be clear I'm not promoting that somebody do this, just wondering why there may not be a viable alternative that does it this way.
Having multiple users send in the same tweet could count as additional validation that it was not edited.
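That cross-validation idea can be sketched in a few lines. This is a hypothetical illustration (not an existing service), using only stdlib hashing: independent submissions of the same tweet hash to the same digest after whitespace normalization, so the majority digest wins and a mismatch flags a possible edit. The function names are made up for the example.

```python
import hashlib
from collections import Counter

def canonical_digest(tweet_text):
    """Hash a whitespace-normalized copy of the tweet so equivalent submissions compare equal."""
    normalized = " ".join(tweet_text.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def consensus_text(submissions):
    """Return the most commonly submitted version of a tweet and how many users sent it."""
    counts = Counter(canonical_digest(s) for s in submissions)
    digest, votes = counts.most_common(1)[0]
    text = next(s for s in submissions if canonical_digest(s) == digest)
    return text, votes
```

With three submissions where one was tampered with, the two matching copies outvote the edited one; more submitters means more confidence.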
Can the archive project or similar crawl twitter to save content?
What kind of checks and balances should social media networks have? Should they be regulated?
But the worst part is that you can't even save a link with all these changes they've made! I was getting errors 90% of the time (maybe even 98%, but who's counting), so it was pointless to even try to save (I gave up after 4 days).
Whoever's managing Delicious is probably the most incompetent person in tech. Their communication with the user base has been atrocious. They made only one or two blog posts and remained completely silent on social media (Twitter) during all these disastrous changes. People were unable to get their links and the company remained mum. The number of "F Delicious" posts on Twitter was very high a few months back. Their site still doesn't work and you can't save links (just tried).
Anyway, I found Pinboard and have been happy since. RIP Delicious.
Someone asked me yesterday how it was I read so many crypto papers, after citing Bos and Costello in crypto dork Slack. I forgot to tell them my trick: I don't! I just follow citations and bookmark the hell out of things.
I mainly use Pinboard as a kitchen sink for articles and completely random stuff I stumble across. You can actually tell from my totally schizophrenic tag cloud. At some point I was using a custom-made script that bookmarked every Hacker News story I upvoted. So now I have hundreds of "hackernews"-tagged bookmarks which I NEVER access.
I rarely go to Pinboard to retrieve a bookmark, maybe 10 times a year. It's faster to Google and for the stuff I really need to go back to I have local bookmarks.
I'm also a heavy RSS consumer: for sites whose new content I want to follow, I use my self-hosted RSS reader, so no need for bookmarks there.
I guess I will keep on storing away links, the service is cheap anyway.
Self-hosted alternatives usually require setting up some kind of interactive website front-end or have janky browser extension support. Pinboard has pretty solid browser support, so I can add to it wherever I am.
I use Hugo to generate a links page from Pinboard's RSS on my website. Every time I build the site, it pulls the RSS feed and formats it all pretty-like. It's not extremely interesting or anything, but here it is in case you're curious. Pinboard user pages aren't all that pretty, but it's really just a container for your data.
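The same idea can be approximated outside Hugo with a small script. This is a hypothetical sketch (not the commenter's actual setup): parse a Pinboard-style RSS feed and emit a markdown links page. The helper names are made up for the example; only the stdlib XML parser is used.

```python
import xml.etree.ElementTree as ET

def _text(item, name):
    # Match child tags while ignoring any XML namespace prefix,
    # since Pinboard-style feeds use namespaced elements.
    for child in item:
        if child.tag.split("}")[-1] == name:
            return (child.text or "").strip()
    return ""

def rss_to_markdown(rss_xml):
    """Render the <item> entries of an RSS feed as a markdown list of links."""
    root = ET.fromstring(rss_xml)
    lines = ["# Links", ""]
    for el in root.iter():
        if el.tag.split("}")[-1] == "item":
            title, link = _text(el, "title"), _text(el, "link")
            if title and link:
                lines.append("- [{}]({})".format(title, link))
    return "\n".join(lines) + "\n"
```

Fetch the feed however you like (cron plus `urllib`, say), write the output to a `.md` file, and any static site generator will pick it up on the next build.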
Bookie in particular looks good and would be my preferred candidate to transition to if you are still using the very old Sitebar or Scuttle, or the interesting Semantic Scuttle.
The only problem now is that there doesn't seem to be a working add-on for Firefox on Android. Does anyone know of one?
I knew 100% this was a low-maintenance business, but I admit reading "the first wave of subscription renewals came due" and "I did almost nothing on the site this year except keep it running" almost in the same phrase hit me like a punch in the stomach; envy for his business acumen/talent, I suppose :)
Main complaint is that it's not from Maciej but the author seems committed to keeping it stable with improvements.
It's one of my everyday tools, and I would pay a monthly fee if it ever came to that.
A perfect example of "do one thing, and do it well".
Also, idlewords, since you are here: any plan to make search list more items per page (50? 100?)? It's a function I use a lot, and sometimes the pagination makes it slower to find an old link.
Del.icio.us was very good, but at some point it went to shit. Maybe it was when it was acquired by Yahoo? I remember trying to contact support about the broken Firefox plugin. They didn't fix it for over 6 months.
I signed up years ago when delicious decided to completely redesign their service for no apparent reason and broke backwards compatibility with all plugins and made the main site less usable.
I am very happy that none of that asshattery is happening at pinboard and the site remains the boring plain ugly link collector that it is.
I do find myself using Pocket more and more, however, and would be curious if Maciej has any thoughts on that service. It seems like so far they've done everything right, although they are aggressive with new features, which worries me a little: sooner or later some executive will try to be brave and fuck it up with a useless redesign.
Pinboard has been the same for years and I feel little attachment to the product.
I am unsure what to switch to, but Pinterest looks like an interesting and innovative solution.
But you should really make a video showing how it works and what the benefits are; that would make it easier to really understand.
> But the bees do not consume their pollen fresh. Instead, they take it into the hive and pack the granules into empty comb cells, mixing them with nectar and digestive fluids and sealing the cell with a drop of honey. Once processed in this way, the pollen remains stable indefinitely. Beekeepers call this form of pollen perga or bee bread.
This implies it's their food.
Given the recent concerns with https://en.wikipedia.org/wiki/Colony_collapse_disorder, is this unusual delicacy worth the potential environmental damages? (I don't mean that rhetorically; It's an actual question.)
Anyone know why it works?
I'd prefer that Bee Bread became more mainstream so that it could be bought from local producers who'd respond to that demand. Bee bread from the Altai mountains in Siberia is double the price of the eastern European BB.
Ideally, BB from New Zealand (e.g. Manuka) would be best, but I couldn't find it.
Thanks for sharing though, I had missed it the last time around. :)
Close associates of the administration are major holders of the country's bonds and...the government fears it'd lose their much-needed support if the payments stopped coming in. Efforts to obtain comment from government press officials on this and other aspects of the story were unsuccessful.
Either that, or debt-holding institutions are willing to kick back a fraction of coupon payments to high ranking officials as an incentive to keep the payments flowing for as long as possible before a civil war or coup forces a restructuring.
Either way, servicing the debt is keeping government officials paid to the detriment of their starving countrymen.
"Caracas fears a default could open up claims to PdVSA assets, such as rigs, refineries and oil shipments. One target by creditors could be the company's Houston-based subsidiary, Citgo Petroleum Corp., which has three U.S. refineries that receive hundreds of thousands of barrels of Venezuelan oil a day."
Venezuela is essentially betting that its situation is temporary (however long "temporary" turns out to be), and a good economic standing in the future may depend heavily on foreign investment (particularly since Venezuela is a petroleum-based economy).
Burning those bridges now could cripple Venezuela permanently.
The same question could be posed of any nation with impoverished citizens and foreign debts. We could even examine the USA, which has never defaulted, even during the height of the Great Depression.
Choosing to default on those debts and instead send temporary (and likely insignificant, once the bureaucracy runs its course) aid to some citizens jeopardizes the entire nation. Simply put, the trade-off is not worth it.
I've seen arguments that say otherwise, but there is precedent for liquidating the holdings of a government which fails to pay.
The government, especially under Maduro, has done so much dumb stuff that I lean toward the stupidity explanation. The main reason there is no food / goods is they try to set the price hugely under market so no one can supply at that price. Plus a huge bunch of other similar issues.
Perhaps things in Venezuela aren't exactly how some newspapers tell it?... Who knows? All I remember from the country is that the people were super nice and the nature was gorgeous.
Violent dictatorships are seldom rational.
This article is also written with the tacit acceptance of neoliberal ideology: The only rational choice for an impoverished nation is to submit to foreign capital and governmental institutions and sell the country for the chance to become another dependent market that will bear the brunt of the next global credit crisis and coercive trade deal.
I live in France, where we've just had terrorist attacks, and I'm afraid we'll follow the same pattern, 15 years later than the USA:
- People want the police to do its job (securing the streets, which goes from checking car insurances to being detectives on terrorism),
- So they vote for more police,
- The police don't do much more than assault easy targets, hit on girls who come to lodge a complaint (true story), and patrol the bike lanes (not much traffic enforcement because it's unpopular, and not much detective work because it's risky),
- So people vote even more right-wing,
- The police get more powers, but still don't do their job much, and assault even more weak people,
- Then we get people who kill police (like in Dallas) or burn police cars (like in Paris) because they're abusing their power.
Already, President Hollande gave the same talk and took the same path on 13th Nov 2015 as Bush did on 11th Sept 2001, so I'm a little afraid there's a trend here.
Now, what societal changes could disrupt a race to the bottom of police brutality, like in the USA?
Officers clearly fear for their lives and view potential encounters with black males through a lens of negative intent, which causes them to react more aggressively. Greater accountability and training will be critical, but only if the culture of the police force changes. This could be very difficult, as a lot of police officers may take these jobs exactly because they're attracted to the danger and violence (in Canada, many of the bouncers I knew were on steroids, only had high school educations, and many wanted to be cops...).
Not for the faint of heart or those that can't deal with blood.
Deleting evidence in a situation like this should be in a special category all by itself.
Note that the couple's four year old daughter was also present.
There are many videos of this, including one where a cop at a gas station asked a black man to show his license and then proceeded to shoot him because he thought he had a gun.
On the other hand shooting police officers only reinforces this kind of training. It's an explosive situation of mistrust, emotion and idiocy.
Anger is such a useless emotion.
That the police used her phone to delete it via vanilla facebook app is 100% plausible, but what's far more implausible is whatever mechanism was used to restore it.
What are the options?
Facebook allows you to undelete a video an hour later? Not to my knowledge.
Is there another automated/normal way for a video to undelete an hour later, especially with a modified content setting?
Is there an option that means something other than "someone on facebook staff saw that it was deleted and explicitly restored it without instruction from the user"?
This is really getting out of hand.
Humanity has not dealt with bad information reaching this many of its lunatics at this kind of speed.
IMHO this is and should be the highest priority bug on the issue list.
Mark Zuckerberg and Larry Page have no idea what to do about it, and keeping quiet or being defensive about it isn't helping.
Just try talking about slowing the speed at which unprocessed information reaches the mentally ill, the ignorant, or the misguided, and you will be taken out as if the communist party were running the show. I expect better from the smart people of Silicon Valley.
I expect them to work out a fix. No one else has the capability.
We need to pay our officers more and have fewer of them. If you start to pay the good ones a respectable salary, you'll get more accountability.
We need to train our officers better. We should have an apprenticeship program lasting at least five years, where the apprentice officer rides around with a veteran, UNARMED, until he picks up every skill necessary to do a good job in real time. Two years of community college and then a year as park police isn't cutting it.
We need to reduce conflict on the streets. Philando Castile knew that his taillight was out; the problem is that in an impoverished and racially oppressed culture, these small fixes become tough to handle when you have other bills to pay. The officer was either going to ticket him or give him a warning, neither of which would have done any good long term, and so you had unnecessary conflict. We need to change the laws so that cops aren't allowed to harass people over minor things like taillights, inspection stickers, small amounts of marijuana, jaywalking, you name it.
Black culture needs to change: stop selling CDs on the corner with a gun in your pocket and get a job that supports your kids. This "gangster" lifestyle is a result of rap music.
The media needs to be fined by the federal government for disproportionately reporting content that is intended to get ratings and thus adds to the chaos and race-baiting. How many people were killed in Chicago last week? Can you name them? Media outlets need to be fined to take away the incentive to over-report sensational news. We need something along the lines of a "Fair Media Coverage Act" that will completely destroy the financial incentive of media outlets that over-report sensational news; this would hopefully have the dual effect of (over the long term) slowing down the mass shootings that seem to happen every other month. These rioters/mass shooters/cop killers are doing it for the 24/7 CNN news reel, and people tune in because the chaos is interesting and exciting, like a war movie with real-life ramifications. Destroy the incentive.
Just some ideas for real changes.
The majority of us will consider crossing the street to avoid an oncoming $category_of_person. Maybe it's black men, maybe it's police, maybe it's beggars, maybe it's missionaries, maybe it is visibly agitated men of any color, etc.
Maybe you feel like you've failed a bit every time it happens. Maybe your personality changes over time and at some point you taught yourself to abstain from such behavior. But, feelings are harder to change than behavior...
When I pass my own problematic $category_of_person on the street (on the same side, now!), I spend a few seconds with no other topic on my mind than that person.
It is because deep down, some part of me still sees that person as a threat, like a cliff or a fire or a bear.
This is terrible, I know. Look, I'm trying to explain racism. Gimmie a minute...
My mother taught me, before I was old enough to know better, to avoid some categories of people. To fear them, for my safety.
I can, do, and will continue to overcome those crappy cards I was dealt.
But don't put a badge on my chest and a gun on my hip and tell me to go talk to various categories of people in inherently heated situations and expect that evil that has been a part of me since before I can remember to never manifest itself in a statistically significant fashion. That's stupid. Police officer is not the job for me. Duh! See above!
What I'm getting at is that my own combination of upbringing and later enlightenment is not uncommon. (Said differently: it's not uncommon for a person to be less racist than their parents were, right?) And therefore some meaningful percentage of good cops who don't consider themselves racist are, in fact, racist in a statistically significant way. Stress = gunfire.
...So... can we be done resisting the Black Lives Matter meme? Please? Y'all look ignorant when you do that. :)
p.s. the fear when walking thing dissipates immediately if a conversation happens, etc. It's not that big of a deal, right? We'll all have a good laugh about it one day when I am caught off guard and mugged by a white girl... Anyway, I'm sorry. I try.
1. Some white people are afraid of black people. (the other way probably too)
2. Most people are very afraid of other people carrying guns.
3. Most people are afraid of some form of resistance when they challenge/approach somebody.
Add it up. It's likely that somewhere a white policeman approaches a black person who has, or could have, a gun, and experiences fear.
Before this is read as an excuse for the policeman, which it is not:
Keep in mind that fear and anger, fight and flight are intertwined. Hatred can come from fear. Fear can come after hatred. One brain-areal is responsible for both emotions.
The only efficient response to defuse these situations is to eliminate the "very afraid" above and disarm the population.
Where I live, police are reasonable and populace is unarmed.
On the other hand, the US is one of the few modern republics that hasn't produced a tyranny yet. Perhaps the 2nd Amendment has something to do with that, along with the rest of the Constitution.
Did anyone really think websites weren't doing this? This is incredibly innocuous compared to other things.
I'd say I want to hack on a federated Reddit clone, but looking at the state of federated social networking, I already feel it'd be dead in the water.
Having to opt-out of tracking feels like another nail in the coffin.
That got turned off immediately.
Edit: I suppose it's a dumb question because the answers can only be one-sided.
Also, you apparently cannot (yet) delete the data Reddit already surreptitiously collected.
Even if it's overall innocuous, I find it shady that an opt-out tracking system was not announced publicly to Reddit users. Were they trying to hide it until someone found it? It seems it would have been smarter for them to control the message around this option than to let their users do so.
Also don't like tracking in general for privacy reasons, but it's a minor concern next to performance.
You are just a damn website! I do not want every website considering itself so important that it needs to subsume the duties of the OS and present them under its own branding.