After all, most people on mobile spend their time inside apps, probably from some Google competitor like Facebook. Within these apps, they click on links, which increasingly load inside webviews; the framing app collects info on where people go, and uses this to sell targeted advertising. Facebook is a king in this space, and is now the second largest server of internet display ads, after Google.
Google's counter to Facebook's encroachment is twofold: drive people to Google's own apps, like the Google Now Launcher (now the default launcher on Android) or the Google app present in older versions of Android and available for iOS, and deploy the same content-framing techniques from their own search results page on mobile user-agents, where the competition is fiercest. There they can also position it as a legitimate UX improvement -- which, to their credit, is largely true, as bigpub content sites on mobile were usually usability nightmares and cesspits of ads.
I understand that the author and quite a few others are peeved at this behavior and at there being no way of turning it off. But it's really not in Google's best interest to even offer the option, because then many people would just turn it off, encouraged by articles like the author's own from last year, when he was caught off-guard, before he gained a more nuanced appreciation for what's really going on.
The bottom line is this: Google is inseparable from its ad-serving and adtech business -- it is, after all, how they make most of their money -- so if you are bothered by their attempts to safeguard their income stream from competitors who have a much easier time curating their own walled gardens, you should stop using Google Search on mobile. There are alternatives, which may not be as thorough at search, but that's the cost of the tradeoff.
Here's something simpler from a non-developer, average-consumer point of view. I recently began taking BART to work daily (new job). For those who don't know, BART is the Bay Area's subway system, and (at least on the East Bay side) cell reception is notoriously spotty.
When I'm on the train, which unfortunately accounts for 2 hours of my day, I'll be browsing, say, Facebook, and looking at links that my friends post. Instant Articles almost always load successfully (and quickly), while external links to the actual sites almost always fail to load or load insanely slowly.
Yes, when you're at home or in the city with good mobile reception, these things make no sense and you'd rather hit the original site directly -- give them their ad revenue to support them, right? But for average consumers who actually have problems like slow internet (like the average Joe who rides public transportation and wants to read on their phone), things like AMP and Instant Articles actually help. I can only imagine how much more significant a problem slow internet/slow mobile data is outside of Silicon Valley (where I live).
P.S. I don't work at Google or Facebook, and I know this sounds like propaganda, not to mention this is exactly what they would like to tell you as the "selling points" of these features, in order to continue building their walled garden empires. Fully aware of it, but I did want to bring up why they exist and why I even actually like them.
1. Obscures the web page's URL.
2. Makes manual zoom in/out impossible.
3. Sometimes hides content mentioned in the article, with no ability to scroll horizontally to see it.
4. Confuses Chrome on Android into over-hiding its top address/menu bar (forcing two swipes all the way back to the top to show it), or forces the bar to stay visible (it won't hide on scroll).
This is just coming from a user's perspective. Fortunately it doesn't impact my work yet, but it may affect websites I build in the future, since AMP now covers almost 100% of the news articles I read.
"I don't know why I do it, but for some reason it just doesn't feel right to me to consume the content through AMP. It feels slightly off, and I want the real deal even if it takes a few seconds extra to load."
I have subconsciously been doing the exact same thing for a while now, and I think this quote covers a good deal of public sentiment. It's weird to use AMP, yet slower without it.
Another main issue I have with AMP is that there is no quick way to check the URL, something I do quite frequently. Instead it's just Google's hosting of the site, with the source only available by clicking on the link icon.
At the risk of sounding like an old fart (I probably do), I fail to understand this frustration of normal mobile users with the so-called slowness of their mobile experience. To quote Louis C.K.: "Give it a second! It's going to space! Can you give it a second to get back from space?!"
The speed difference on SERPs comes from the background downloading and (possibly) pre-rendering of AMP pages. This functionality could easily be added to browsers, keeping people on their own websites and denying Google control over the content.
We already have <link rel="preload/prefetch"> but how about adding <link rel="prerender" href="http://amp.newswebsite.com/article/etc." />.
This would absolutely give all of the benefits of AMP Cache without Google embracing and extending the web. It's also much simpler to integrate: every single site can choose to benefit from this (not just SERPs), and I don't end up accidentally sending AMP Cache URLs to my friends on mobile.
The AMP saga has pretty clearly shown that users care about content while Web developers only care about URLs and what goes over the wire. This is a huge disconnect. It doesn't help that many Web developers show no empathy for the users' viewpoint.
Ultimately it probably is easier for Google to add an opt-out to appease a very small, very vocal minority than to educate them that the URL doesn't matter.
There are extensions like NoScript that can give similar results for other browsers. https://noscript.net/
Marketing has taken the lead in corporate website projects to the detriment of end-users; AMP puts the user back in the center.
Although there is much to be concerned about Google's ever-expanding reach into the daily life of a good portion of the planet, I think web proponents have more to fear from the likes of FB, Apple, and others appearing on the horizon. These companies are mostly succeeding at meeting current UX expectations (performance, standardization, ease-of-use), and in doing so they are capturing eyeballs away from the web. It's possible some of those who have left for these walled gardens may not return.
"google amp pages", "google amp annoying", "google amp sucks", "google amp conference"
"test", "cache", "disable", "maps"
This simple test is therefore inconclusive, but my hypothesis is that his search autocomplete hints are, ironically, colored by his search history. The only negative word I got (disabled) is much more neutral.
Now that I think about it, duckduckgo's "no tracking" isn't just valuable for privacy. It's also valuable for consistent search results across computers without yielding even more information (logging in etc). A few times I made a query and found something useful and surprising, and then I wasn't able to replicate the query on another computer to show someone else. In any case I'd hate to miss a rare interesting page because Google thought that extra 10 pages about Linux might interest me more.
At the agency I worked at, it was a huge problem because back then clients and business people still used AOL and would see the jacked-up versions of their site. There was literally nothing you could do; they did it to small and large sites alike, with abandon.
AMP reminds me a bit of that type of setup with AOL re-compressing and crunching down sites through their network. I agree with Google on doing this for email for security but not necessarily websites. AMP to me is quite annoying and in general a bad move.
Reminds me of this: http://blackhat.com/media/bh-usa-97/blackhat-eetimes.html
Currently, with AMP, Google gets not only your traffic but also your content on their own domains (which makes all content look equally trustworthy), and at the same time they mark sites that have AMP available in their search results, effectively weighting those results differently, since it can train users to click on them more.
Ultimately this is bad for everyone but Google.
However, if it were a framework / set of tools, we could create our own AMP pages and simply host them under our own domains. Google's cache is really the only unique thing going on here, and we wouldn't have to worry about sharing trust.
As a developer I'm not a fan. It's another thing to manage and maintain. And the last time I checked, you can't leave without some serious consequences.
As a marketer I like the increased CTR but dislike the higher bounce rate and limited features.
// ==UserScript==
// @name         Un-AMP
// @namespace    http://tampermonkey.net/
// @version      0.1
// @description  avoids google AMP links and navigates to the original content
// @author       Alenros
// @match        https://www.google.co.il/amp/*
// @match        https://www.google.com/amp/*
// @grant        none
// ==/UserScript==
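The metadata block above omits the script body. A minimal sketch of what such a userscript might do (this is my own illustration, not Alenros's actual code; the "/amp/" URL layout is inferred from the @match patterns above):

```javascript
// Given a Google AMP viewer URL, recover the original article URL.
// AMP viewer paths look like /amp/example.com/... for http sites and
// /amp/s/example.com/... for https sites (an assumption based on the
// @match patterns above).
function unAmp(url) {
  const m = url.match(/^https:\/\/www\.google\.[a-z.]+\/amp\/(s\/)?(.*)$/);
  if (!m) return url; // not an AMP viewer link
  return (m[1] ? "https://" : "http://") + m[2];
}

// The userscript body would then simply navigate away:
// const original = unAmp(location.href);
// if (original !== location.href) location.replace(original);
```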
HN: But the open Internet!
Users: What's that?
HN: Normal websites!
Users: Like...the really slow ones? With all the annoying popovers? And pages that take forever to load? And for some reason cause my fancy new phone to slow to a crawl?
HN: Well, those websites should rewrite their entire codebase to be faster.
Users: That doesn't help me, though.
HN: Trust in the free market! The problem is you, the user, who just needs to exert more pressure on website purveyors so they'll make performant web sites.
Users: You mean, like, preferring websites that offer faster experiences? Okay. Continues to use AMP.
If they somehow solved the URL issue (even by faking the address bar) and showed both original and AMP links in search results, it would probably defuse much of the anti-AMP argument. Both seem to be just UI issues.
It's the first time I've found an alternative to google.com that is actually usable (i.e. I find what I'm looking for near the top of the first results page every time I make a search).
You can use Google as one of the results providers, but you won't see any AMP results, and since searx can mix in results from Stack Overflow etc, you might find that a different search engine than Google still gets you good results.
I think Google would pull fewer of these monopolistic tricks if people would realise they have genuine alternatives.
What did happen, though, is that I found Google results a lot worse on mobile, and I ended up not searching for stuff on my mobile at all. Google results really look like a mess on mobile now...
They really went from minimalist zen to baroque Indian arabesque over the years...
I don't think I've ever seen an AMP-enabled website, I certainly never noticed any buttons suggesting I visit the original website.
I'd suggest trying an alternative, maybe https://duckduckgo.com.
Is it an American thing, not enabled for other countries? Just what am I supposed to look for?
Do you only see them when doing a Google search?
But given the URL format, it should be trivial for a browser extension to rewrite links or requests from AMP pages to the original. I bet one already exists.
The ticket was closed a few days ago. People dislike stuff like AMP, but we are probably stuck with it, there just isn't much interest in alternatives.
From Google News, the top hits are served through AMP and I lose about 1/10 of my screen area to a pointless blue "bar" underneath Safari's address bar. This loss of screen space is the only reason I object to AMP.
Much faster everywhere, in all browsers and platforms.
to our HTTP requests Google and publishers will start noticing we care.
I sometimes think it would've been better if a few things had visibly failed in January 2000.
- Erik, CEO, Chess.com
Snarky... except that there were probably years of games in which to notice that you were approaching a "magic number" like 2^31.
That said, this is definitely indicative of what's going to happen just 20 years, 6 months and 20 days from now. I mean, we're still cranking out 32-bit CPUs in the billions, running more and more devices, and devs still aren't thinking beyond a few years out. I know of code I wrote 12 years ago still happily cranking away in production, and there may be some I wrote even longer ago out there... and I guarantee I hadn't given two thoughts to the year 2038 problem back then, and I doubt many devs are giving it much thought today.
It's truly going to be chaos.
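The 2038 boundary is easy to see for yourself. A quick sketch (JavaScript dates count milliseconds, so the 32-bit second counts are multiplied by 1000):

```javascript
// A signed 32-bit time_t tops out at 2^31 - 1 seconds after the Unix epoch.
const lastGoodSecond = new Date((2 ** 31 - 1) * 1000);
console.log(lastGoodSecond.toISOString()); // 2038-01-19T03:14:07.000Z

// One second later, a 32-bit counter wraps around to -2^31,
// throwing the clock back to 1901.
const wrapped = new Date(-(2 ** 31) * 1000);
console.log(wrapped.toISOString()); // 1901-12-13T20:45:52.000Z
```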
That was a valuable lesson.
(I actually generated most entries myself while testing stuff -- live in prod, of course -- and while there were probably fewer than 255 votes, the AUTO_INCREMENT did its job and produced an overflow.)
Twitter saw it coming and forced the issue. By saying that at a certain date and time they would manually jump the ID numbers rather than wait for it to happen at some unpredictable time.
Chess.com is a great site; also lichess.org and chessable.com. If you like chess you should check them out.
Thanks for all the comments! Always lots to learn from.
You probably mean 2^31 - 1.
Didn't expect Chess.com and YouTube to have a crossover of users? Surprised there isn't active moderation on a site this size.
IMPOSSIBLE to predict.
Even though all of these gains (plus more, since core Yahoo lost value) came from Alibaba, this does look impressive at first glance...
> We oversaw the creation $43B in market capitalization and shareholder value. Our market cap has gone from $18B to $51B (increasing our valuation by $33B), while we returned nearly $10B in cash to shareholders.
Sadly the list of employee gains seems very spartan compared to the shareholder gains.
For those of you wondering what the Yahoo/Altaba shell contains now...
- an approximately 15 percent equity stake in China's Alibaba Group Holding Ltd.,
- about 36 percent in Yahoo Japan Corp.,
- cash and marketable debt securities,
- certain minority investments and Excalibur IP, which owns some patent assets.
Subsequent reporting has hardened my opinion: https://www.nytimes.com/2014/12/21/magazine/what-happened-wh...
I have tried to examine what role gender plays in my visceral dislike for Marissa Mayer. I hope it is a small one. I give myself some consolation that I recoil almost equally when reading any news coverage of Travis Kalanick.
Take away Marissa Mayer from this story, and replace her with a generic CEO, and I'm not sure we'd see the same mood in either comment section.
Why is this? Is this because she's from Google? Because she's a former engineer? Because she's a female CEO? Is she just a politically polarizing topic ala Elon Musk?
Genuinely curious. Anyone have any ideas?
Tumblr has a massive audience, but some of the worst tech among the social media, and now it seems like it might get abandoned completely. So people will eventually migrate to something else, right?
What can other platforms, like Medium, do about this? If you had a platform that might be valuable for a similar use case (though, hopefully, much better), what would you be doing right now? Any ideas or advice?
I've been on Flickr for a long time now and it works well for me, should I be worried?
I dream of a day when the Engineers who make the Tech Company what it is, are also offered 'Golden Parachutes' as part of a Job Offer.
In all seriousness, Yahoo! has done an amazing job with their fantasy sports.
The problem with consolidations like this into bigger and bigger conglomerates is that it reduces editorial independence in favor of a false sense of corporate unification among all the "verticals". The heavy and overweight company has a "great" vision which involves being everything to everybody. But that never works. End result will likely end up providing a lukewarm mediocrity in them all.
What Yahoo probably should have done was divest; instead it allowed itself to be swallowed whole by an ISP whose sole goal (as evidenced by its malfeasance in destroying Net Neutrality) is to be able to selectively prioritize traffic in the ways that are most profitable to them... Ergo, the objective of this kind of empire is not to track down the truth and inform people about what is really going on, but to entertain and distract.
Perhaps Verizon can do something useful with the brand, but the Yahoo I knew is dead and probably has been since Mayer took over. She was brought in as a hatchet-woman to get an acquisition and got the job done.
In plain English: this was going to happen anyway.
Since then she's given a lot of cash to shareholders, raised the stock price, and is selling the "negative value" core business for $4.5 billion.
That's an astounding success.
It's insane how much of it goes to my direct mailbox, right in front of my eyes! Some even have the word "viagra" in the subject; they come from weird addresses like hJGabtmDwbaiaJUsgUNiepwwUzDUUdanBHFpiMEghzLKNsotQTbrhZdpDzCHFWatqQB@perico.hunmooth.com and open up with images and everything, ready for my click.
I suspect Verizon is already working hard on breaking the remaining thing that worked fine until now: the Yahoo mailbox.
But I'm fine with that. I've had it in my pipeline to move off them for so long; now there's another incentive to actually do so :)
Someone does a Go version and gets the same speed as GNU yes. Someone else tries several languages: they got the same speed in LuaJIT, and faster in m4 and PHP. Ruby and Perl were about 10% slower, python2 about 10% slower still, and python3 about half that speed. The code is given for all of these, and subsequent comments improved the python3 result by about 50%, but it still didn't catch up to python2.
yes | rm -r large_directory
yes | fsck /dev/foo
I don't understand this reasoning. Why is it being limited to main memory speed? Surely the yes program, the fragments of the OS being used, and the program reading the data, all fit within the L2 cache?
The top comment is:
"It's a shame they didn't finish their kernel, but at least they got yes working at 10GiB/s."
which, as an OS guy, someone who has been working on Unix for 30+ years, a guy who was friends with one of the QNX kernel guys (they had perhaps the only widely used microkernel that actually delivered), I find hugely amusing and spot on. The GNU guys never really stepped up to being kernel people. Bitch at me all you want, they didn't get there. It's a funny comment, especially coming from Reddit.
It's about twice as fast as GNU yes now on my FreeBSD system here.
Not complaining, I like this kind of analysis
But it seems you won't be limited, in a shell script, by the speed you can push y's
# /usr/local/bin/yes | pv > /dev/null
11.5MiB 0:00:09 [1.02MiB/s] [ <=>]
# /usr/bin/yes | pv > /dev/null
1.07GiB 0:00:09 [ 142MiB/s] [ <=>]
Make the static array BUFSIZ * 1024 to trim the syscalls by a factor of 1000.
So, what's the distribution of #bytes read for runs of 'yes'? If we know that, is GNU 'yes' really faster than the simpler BSD versions?
Also, assuming this exercise is still somewhat worthwhile, could startup time be decreased by creating a static buffer with a few thousand copies of "y\n"? What effect would that have on the size of the binary? I suspect it wouldn't go up much, given that you can lose the dynamic linking information (which may mean having to make a direct syscall, too).
`yes` will help me on the "see what happens when something uses all the CPU and memory" test case. Thanks Reddit/HN!
I would write() the buffer each time it gets enlarged, in order to improve startup speed.
Also: The reddit program has a bug if the size of the buffer is not a multiple of the input text size.
And it increases the buffer one copy at a time, instead of copying the buffer onto itself, which would reduce the number of loops needed (at the cost of slightly more complicated math).
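To illustrate the difference, here is a sketch (in JavaScript; the names and sizes are mine, not from the Reddit post) of filling the buffer by copying it onto itself, doubling the filled region each pass:

```javascript
// Fill a buffer with repeated copies of `word` by doubling: copy the
// already-filled prefix onto the unfilled tail, so an N-byte buffer
// needs ~log2(N) copies instead of N / word.length appends.
// Pick totalSize as a multiple of word.length, or the final partial
// copy will cut a word in half (the bug mentioned above).
function fillRepeated(word, totalSize) {
  const buf = Buffer.alloc(totalSize);
  buf.write(word, 0);
  let filled = word.length;
  while (filled < totalSize) {
    const n = Math.min(filled, totalSize - filled);
    buf.copy(buf, filled, 0, n); // copy bytes [0, n) to offset `filled`
    filled += n;
  }
  return buf;
}

// A toy `yes` would then just write this buffer in a loop:
// const buf = fillRepeated("y\n", 64 * 1024);
// for (;;) process.stdout.write(buf);
```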
After reading so many unfair criticisms and so much pedantic dislike of PHP here, I just want to say: STFU.
... Just to name a few.
Why did the GNU developers go to such lengths to optimize the yes program? It's a tiny, simple shell utility that is mostly used for letting developers lazily "y" their way through confirmation prompts thrown out by other shell scripts.
Is this a case of optimization "horniness" (for lack of a better word) taken to its most absurd extreme, or is there some use case where making the yes program very fast is actually important?
Can we have a new flag for posts by people who don't know what they're doing so I can skip them? I am serious.
 - https://www.samba.org/ftp/tridge/misc/french_cafe.txt
I think they're going to be keeping the 2016 version up for a while longer. They generally start a new one in September each year.
It's not like this article teaches much about the general "reversing mindset" (similar to the "hacker mindset", but not quite exactly the same), or the "methodology" as promised in the title. Because yes there is some very interesting overlap in skill within the broad field of RE. Ask any pentester who also picks locks.
Not to discredit the article itself, btw, which is fine given what it actually covers. Which is about Linux binaries, and in particular with the object of solving a crackme puzzle.
Maybe "Reverse engineering a crackme for beginners" would be a bit more descriptive.
There was one thing in particular where I knew there was a jump somewhere (if some_length < some_width) that caused bad outputs. I was playing around looking at registers etc in gdb while following along with a disassembled version of the code, but it was impossible to get any idea where to start.
I wanted something that could give me a few seconds worth of samples of where the instruction register was spending its time as a starting point, but couldn't find any such tool (linux).
Within my control:
- giving input files to explicitly set unique numbers to watch out for
- giving inputs that would generate bad output numbers only in the bad code path
- giving inputs to force a load of jumps down the bad or good code paths
I honestly wish CMU would release the lectures and full class materials for 15-213 (the course most typically associated with the bomb lab mentioned here). The lectures, combined with the accompanying text and labs, form a masterpiece, and it's a shame the community at large can't take better advantage of it. It's like SICP for systems: it's that effing good.
The tests, however, are just awful. Those can safely be dumpstered.
I remember MadWizard's assembly tutorial being very helpful at the time.
My best challenge was Brazil (a 3ds Max render engine). It had all types of checks that would only show up when rendering... but that was no match. Good times.
TL;DR: For legacy reasons, some words produce valid colors even if they don't respect the standard color formats. For example, "chucknorris" produces red.
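The rules in question are the HTML spec's "rules for parsing a legacy colour value". A simplified JavaScript sketch (it skips edge cases like the "#rgb" shorthand and surrounding whitespace) shows why "chucknorris" comes out red:

```javascript
// Simplified legacy colour parsing: every non-hex character becomes a
// zero, the string is padded to a multiple of 3 and split into three
// components, and each component is trimmed down to 2 hex digits.
function legacyColor(input) {
  let s = input.replace(/^#/, "").replace(/[^0-9a-fA-F]/g, "0");
  while (s.length === 0 || s.length % 3 !== 0) s += "0";
  const len = s.length / 3;
  let parts = [s.slice(0, len), s.slice(len, 2 * len), s.slice(2 * len)];
  parts = parts.map(p => p.slice(-8)); // keep at most the last 8 chars
  // Strip leading zeros shared by all three components, down to 2 chars...
  while (parts[0].length > 2 && parts.every(p => p[0] === "0")) {
    parts = parts.map(p => p.slice(1));
  }
  // ...then truncate each component to its first 2 hex digits.
  return "#" + parts.map(p => p.slice(0, 2)).join("");
}

console.log(legacyColor("chucknorris")); // "#c00000" -- a dark red
console.log(legacyColor("5afe57"));      // "#5afe57" -- a green
```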
With that said there are some pretty cool ones (e.g. 5afe57 = safest = a green) that do match up. Can't say I can think of many hugely practical uses for this, but it's kinda neat!
is the colour of asafoetida: https://www.google.co.uk/search?q=asafoetida&source=lnms&tbm...
And #C0C0A5 is cocoa.
For fitness studio folks who are into hex (of which there are obviously billions) #F17 (bright pink) would be popular too.
Array.from(document.querySelectorAll(".wrap > div"))
  .filter(n => n.getAttribute("name").includes("t"))
  .forEach(n => n.parentNode.removeChild(n));
Bonus points: named color support for valid CSS colors, such as dodgerblue.
One oddity: for some reason, the site's CSS makes text selection highlights invisible. If you select text, the selection looks identical to unselected text, though copy/paste still works.
Also, the color boxes appear to be editable text areas: if you click on one, you can backspace or Ctrl-U and the text of the color vanishes, until you hover/unhover it again and the text gets reset (because of the 1337/LEET translation going on with hover/unhover).
aspell -d en dump master | aspell -l en expand | grep -e
I didn't think of the other possibilities(like #bada55), but instead opted to shorten it to 3 letter codes. The one I like most is #b00, a nice red.
> not found
> closes tab
With the web, the convention right now is to treat the subdomain as a different security origin (with the exception of www). So the link should show c0ffee.surge.sh, not surge.sh.
If this is a manual setting, it probably also needs to be set for neocities.org. I noticed that wordpress.com domains were being subdomained properly.
It really shouldn't be manual, it should just always show the correct origin domain.
1. I believe it began with the hacker getting my DOB/SSN.
2. The hacker called the wireless provider and forwarded all calls and texts to a burner phone. Eventually, the hacker ported my wireless number to another provider/number (not sure which), and the phone registered to my provider stopped working. The landline phone was also forwarding calls to another number.*
3. The hacker gained access to our email (as that email was also listed within the telco's site). At the beginning, the hacker did not reset the password. After I changed the email's password, the hacker was still gaining access to our emails and eventually reset it, blocking my access. (The reason: all the texts and calls were forwarding to the burner phone, so he/she could reset the password at any time.)
5. Requested 2FA from the bank.
6. Gained access to the bank account.
This unfolded over the course of 3 months. It was a nightmare to resolve, and the paranoia still remains. The hacker later went on to open several bank accounts. Fortunately, this was discovered early. The entire situation was communicated to the FBI, local police, and the bank institutions, but I do not think anyone cared.
*I saw two numbers that were being used within my wireless account site to forward the calls.
After leaving the party with my youngest, I went to the grocery store, and then on home. When I got home my wife was gone, which I expected since she was picking up the older kids from the party.
Throughout this afternoon I had not been checking my phone in an attempt to be a bit less connected on the weekends.
About half an hour later my wife comes home totally freaked out and frazzled.
Apparently, after I had left, someone went into a T-Mobile store and somehow convinced the associate that my number was theirs. I had received a couple of texts from T-Mobile with a PIN when the store associate attempted to do something, but I was not aware of them until later.
Once this person had my number, they called my bank, reset my online password, and transferred all of our money from various accounts into one of my checking accounts. The bank then put a hold on everything (thank god).
My wife happened to have been paying bills online while this was happening, and saw it all go down. Her first thought was to call me, then when I didn't answer to call the mom throwing the birthday party.
Birthday party mom told my wife I had left, so my wife assumed that myself and our 3 year old were being mugged or something. The police were involved and she spent a good amount of time freaking out trying to find me.
All in all I had a pretty good afternoon :P
For real tho, it was a freaking mess. Took weeks to get our accounts safe, and we try to avoid using phone numbers for 2fa now.
1. Do NOT secure your sensitive accounts (facebook, primary email, bank accounts, twitter, etc) with your telco phone #. Telco Phone number is NOT secure!
"Create a brand new Gmail account. Do not connect it to any of your existing email accounts. (When signing up for a new Gmail, you don't need to enter a phone number or current email, although there are fields for you to do so. Leave them blank.) Once you've created the new island-unto-itself email address, create a new Google Voice number." Use this Google Voice number to secure your primary accounts, and don't have your telco number listed in any of those accounts.
But, make sure your New Gmail account is super secure, with a security key, as mentioned in the article.
2. Check the password recovery methods for all your sensitive accounts and make sure the answers aren't duplicated from any other site. Actually, it's best to remove them, if you can.
If any security experts want to chime in, please do.
My current two banks don't have proper 2FA enabled. As far as I remember, the security questions available at one of my banks (a credit union) are simple enough that you could probably answer them with a public-records search. The other bank (Chase) has SMS 2FA, but outside of that it's just public-database questions. (I know this because I had my card number stolen recently; I currently don't have access to my phone, as I'm out of the country, and they asked me a few questions from a public database, like whether I had ever lived at ABC Dr., do you know this person, what is their full name, etc.) I'd much rather be able to give the banks some kind of information that they are required to verify before anyone can access my account, like a verbal passphrase, meaning I wouldn't be able to access my account over the phone without it, but I don't think that's possible.
As I was required to upgrade my Micro SIM to a Nano SIM, I went to one of my provider's shops and asked for a Nano SIM for phone number X. I was asked to verbally confirm my name and address, and that was it. No ID check, no nothing. "Here you go sir, your new SIM card will be active within a few minutes. Can I help you with anything else?" What. the.
I also find it odd that Facebook and other sites will let you sign up solely with a phone number. There are prepaid cell phone providers that recycle phone numbers, etc. It just seems stupid to rely on a phone number alone for authentication, though I'm okay with it as a second factor, since you still need to know the password. Twitter has a developer product where you can be texted a code to log in using only a phone number, which to me just seems wrong.
It'd be nice if, before porting a number or changing important info, they had to actually call or text you first to confirm. But one problem is that people lose their phones and need a new SIM or phone... For that case, I think I'd require an actual store visit, but that doesn't work too well for prepaid providers without physical stores, who sell via other retailers like Walmart, Target, etc. Maybe in that case, without nearby stores, they could partner with those retailers to verify ID, or have you fax an ID in.
Conversation with one of my banks the other day:
Them: Can we please verify a code sent to your phone number?
Me: Umm, sure, although that won't verify anything. Use something else to verify that it's me.
Them: Can you please verify your phone number?
Me: Umm, I don't know what phone number I used with you? Try XXX-XXX-XXXX, XXX-XXX-XXXX, XXX-XXX-XXXX, XXX-XXX-XXXX, XXX-XXX-XXXX, XXX-XXX-XXXX, and XXX-XXX-XXXX? They all belong to me depending on where I am.
Them: Can we use XXX-XXX-XXXX? Do you have this phone with you right now so we can we send a text message with a verification code?
Me: Send your insecure SMS to any of my numbers. They all go to my e-mail inbox. [I don't need to have my "phone" with me -- my "phones" are virtual.]
I have had two phones die on me that were my 2FA device, plus OS upgrades, so I have gone through resetting 10-20 2FA accounts a few times. With upgrades I usually foresaw it and downgraded my 2FA beforehand.
All I wish is that resetting 2FA were a very, very slow step-by-step process, loudly broadcast to every email, SMS number, postal address, etc. associated with the account. But given cost-cutting in customer service departments, I know that won't happen.
The problem is that the phone company owns your phone number; you just get access to it as part of a service. Unlike a domain name, which you own.
If we change the law we'd bring more accountability.
Yes, it's a problem that security questions turn hacking into a simple public records search.
BUT most terms of service have a line like 'you warrant that you've been entirely truthful with us' or something. If you give the wrong security question to your bank, they potentially have grounds to freeze your money or screw you later.
Why isn't the answer 'consumers have the power -- punish services that don't support FIDO by not using them'.
At best this article is saying 'don't connect anything to anything'.
The best way he came up with to secure services that insist on using SMS for 2FA (or credential reset) was to register the number of a pre-paid phone for those services.
Inconvenient? YES. But a pre-paid phone number can not be ported by a negligent (or willfully criminal!) operator.
I have enabled proper 2FA on my Google account with U2F, but I haven't disabled everything else yet because I only have one token, and I still need something like TOTP for stuff that uses Google accounts, but doesn't support U2F.
As a closely related remark, I wish U2F would just get popular enough, it's pretty convenient, isn't vulnerable against the kind of attack SMS-based 2FA is, and protects against phishing. But almost nobody outside Google supports it, and OS/Application support is rather incomplete or requires additional setup.
Even given that, since it relies upon human choice and behavior, and does nothing against attackers with assets inside the phone company, 2FA via SMS seems like a bad idea.
Seems like some combination of the following:
* using Google Voice for all account recovery situations that require a phone number
* Calling your cell phone provider to have a note that states do not allow for number porting
* Use hardware 2fa tokens. Have two setup, one as a backup in case you lose one.
* Keep a copy of your recovery codes somewhere accessible
* Probably have a safety deposit box with your backup 2fa token and recovery codes stored.
* Primary email provider should use a hardware token and not have sms recovery
* Use unique passwords everywhere and use a password manager
They don't have any offices open to the public, nor any hotline, and they really are the cheapest alternative where I live, but it seems their attempts to save money have left them with more secure infrastructure than some notorious carriers from very advanced countries.
1) Ban SMS as a second factor for high risk targets like banks.
2) Telecom companies should require social security number or uniquely identifying information to provide account access.
Users aren't warned enough that everything fails, and that sooner or later they will have to go through the 2FA deactivation/account recovery process. They really must be reminded to BACK UP the recovery code(s), with "back up" meaning not just "keep somewhere", but "keep where you can actually find it when you'll need it". (But not in your password manager.)
This is true for SMS 2FA as well, but completely losing the number (as long as one's a paying customer) must be significantly less common than losing a device.
She just replied, "Well, we could change the SIM to your name," didn't even check with the original owner, and five minutes later I was on my way with a new SIM.
On the phone with them, they said the card had been flagged as being used in fraud because we were off in the middle of nowhere, away from our normal spending patterns. The ONLY way to reactivate the card is for the CC company to SMS text us with a code, which we have to read back to them. The thing is, the very reason they flagged us - that we were way off in the middle of nowhere - also meant that we had no cell phone service, and couldn't receive the SMS. And given the vast size of Big Bend (getting out of the park from the hotel is a 45 minute drive), it was questionable if I'd be able to drive to a location with cell service if I couldn't fill my gas tank first.
The hotel manager overheard me arguing on the payphone with the credit card company, and he drew me a map of some pockets of cell service within the park, so in the end I was able to get it taken care of.
One ironic part of this was that the card is in my wife's name. When they wouldn't listen to her, she gave them verbal authorization to talk to me in her stead. They were willing to believe her identity for this, but not for the re-activation of the card, which doesn't make sense.
I also asked their CSR why they flagged the card. They said that I should always notify them if I'm going away. I asked them what the criteria is for that, since this was an in-state trip (I live in Austin, and Big Bend is also in Texas). The CSR said that's odd, and he doesn't know why that would happen.
So good for them that they watch for fraud, but the failure mode for their heuristic is the most catastrophic possible. If the very reason they flag me also prevents me from fixing the problem, then it's a rather badly-designed system.
There was no authentication at all. Literally anyone could have walked in, given my name and phone number, and gained access to my phone. I stopped using my phone for 2FA after that.
In China your phone number is pretty much as valuable as all your password combined, all services are solely linked to it.
Even though phone companies ask for id before issuing a SIM card, I'm pretty sure a tiny bribe is enough to get past most store clerks
As an addendum, several of my purchases were also flagged as fraudulent by them, and I have had to call them three times so far this year. All purchases were from the same Amazon account, and from the same IP, too. So I do not think they have a good fraud team.
If I use a 2FA app like Google's and lose my phone, I need to have the recovery codes ready. If I were to use my phone number instead, I kind of don't need that, since I'd just get a new SIM and a new phone. But at the same time, we now know that isn't safe.
So what is the solution here? I liked the idea of something like DUO but not enough places use it.
Could you convince a cell phone store rep that you are who you say you are without your drivers license?
Or, for a million bucks, could you make a cell phone store rep think you were someone else?
That answer is why SMS 2FA isn't such a great idea: your security checkpoint is owned by an (underpaid) store representative.
Why not just have all sites that require SMS 2FA (there are a lot, including telcos) point at a personal Google Voice number, and also remove any SMS 2FA from that Google account and your personal one? Wouldn't that solve the issue they are describing? Why do you need a third account?
Kraken published a highly useful blog post on it. Do give it a read: http://blog.kraken.com/post/153209105847/security-advisory-m...
I wonder what other scams are being incubated in lesser-known parts of the world, that are waiting to be unleashed.
Even so, hackers can still use SS7 to hijack phone numbers.
Get the 2nd factor
In a single-node database or even a manually-sharded one, this post's advice is good (For Friendfeed, we used a variation of the "Integers Internal, UUIDs External" strategy on sharded mysql: https://backchannel.org/blog/friendfeed-schemaless-mysql).
But in a distributed database like CockroachDB (Disclosure: I'm the co-founder and CTO of Cockroach Labs) or Google Cloud Spanner, it's usually better to get the random scattering of a UUID primary key, because that spreads the workload across all the nodes in the cluster. Sometimes query patterns benefit enough from an ordered PK to overcome this advantage, but usually it's better to use randomly-distributed PKs by default.
For CockroachDB, my general recommendation for schema design would be to use UUIDs as the primary keys of tables that make up the top level of an interleaved table hierarchy, and SERIAL keys for tables that are interleaved into another. (Google's recommendations for Spanner are similar: https://cloud.google.com/spanner/docs/schema-design#choosing...)
This is called a "candidate key" in the existing literature; much has been written about such things.
Both UUIDs and auto-increment IDs are "surrogate keys" because they are arbitrary with respect to the data.
Lastly, "natural keys" are combinations of columns that consist of the business data.
Why does your security rely on primary key obscurity? This seems like you're doing something horribly wrong, put some authentication on that or something.
And no, no they won't. Hitting a collision is very hard if you're using cryptographic strength random UUIDs, you wouldn't even be able to bruteforce 64 bits over the internet in a reasonable timeframe.
Go ahead, try the math on that, the only reason small keys are vulnerable to local attack is because you can perform an enormous number of attempts per second, often in thousands of millions of attempts per second and they can keep at it for as long as they want. The database server won't let you query anywhere near that fast. You will never get anything like that for network based attacks as you're limited by bandwidth, latency and of course, the other side who will notice if you even try to do this for any significant period of time and likely block your attempts or limit them greatly.
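To make that math concrete, here is a back-of-envelope calculation in Python, assuming a deliberately generous one million guesses per second (a real server would throttle or block long before that rate):

```python
# Time needed to exhaust a 64-bit random identifier space over the
# network, at an (unrealistically generous) 1 million guesses/second.
keyspace = 2 ** 64
guesses_per_second = 1_000_000
seconds = keyspace / guesses_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:,.0f} years")  # roughly 585,000 years for full coverage
```

Even at that absurd rate, covering the space takes on the order of half a million years, and the 122 random bits of a UUID v4 make it astronomically harder still.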
I'm tired of hearing "you don't have to say how to get the data, you have to tell the database what you want and it will get that in the most efficient manner" and then deal with an encyclopedia of byzantine rules to get it to do the aforementioned "efficient manner" with anything approaching decent performance. I can see the art, but the practicality mars it beyond recognition. It's like Venus de Milo sculpted out of duct-tape and bubble gum.
Sorry for the rant, I'm just getting frustrated with performance problems in small data sets. I've taken the courses, I've read Date and Darwen, and I'm just starting to get terribly disillusioned.
There's no substance to these claims. Chasing the links around, we finally find this article: http://www.sqlskills.com/blogs/kimberly/guids-as-primary-key... which makes the reasonable argument that random primary keys can cause performance-robbing fragmentation on clustered indexes.
But Postgres doesn't _have_ clustered indexes, so that article doesn't apply at all. The other authors appear to have missed this important point.
One could make the argument that the index itself becoming fragmented could cause some performance degradation, but I've yet to see any convincing evidence that index fragmentation produces any measurable performance issues (my own experiments have been inconclusive).
This shouldn't be right. UTF-8 encoding uses the same 8 bits for each valid UUID character that Latin-1 would. Unless someone put invalid characters in the UUID field, I would guess that the new encoding was actually UTF-16 or something.
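This is easy to check: the hex digits and dashes of a canonical UUID string are all ASCII, so UTF-8 and Latin-1 encode them identically at one byte per character; only a two-byte encoding like UTF-16 doubles the size. A quick Python demonstration:

```python
s = "123e4567-e89b-12d3-a456-426614174000"  # any canonical UUID string

assert len(s.encode("utf-8")) == 36      # ASCII chars are 1 byte in UTF-8
assert len(s.encode("latin-1")) == 36    # identical size in Latin-1
assert len(s.encode("utf-16-le")) == 72  # UTF-16 is what doubles it
```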
You'll never say this out loud: 7383929. You may be able to remember it, maybe. With a UUID, you'll match the last few and first few characters just as fast in your head.
UUIDs are fine. Sorting is an issue, but at scale (the entire point of this article), how often do you need to sort your entire space of objects by primary key? You'll have another column to sort on.
Hiding primary keys and having two keys seems like a great way to make all queries and debugging twice as complicated.
The moment any db starts to grow to these areas, UUIDs lead to far less issues than incrementing ids everytime.
Most RDBMSes now have optimizations and native types (e.g. uniqueidentifier) for UUIDs/GUIDs, so this is really a moot point at this point; most UUIDs are no longer stored as strings in DBs unless they're legacy from the time before native UUID types.
UUIDs are right for most projects but not all and as typical in any system, the environment and needs of your project will dictate whether it makes sense to use them.
UUIDs eliminating the round trip and the need to deal with autonumbering/sequencing is a massive benefit; the only real con of UUIDs is the extra 8 bytes, but they make up for it by requiring fewer lookups at runtime when creating new data or associating data with it.
One of our Ops guys did an experiment where they put a uniqueness constraint on the ID column and added an auto-incrementing primary key column that's never exposed to the code driving the thing. It apparently sped up our DB performance by orders of magnitude.
It also turns out that MySQL would perform faster just by leaving those values as strings instead of converting them to binary values. We've got some outside pressure to use Oracle instead of MySQL, and apparently it performs much better than MySQL with our current schema so we apparently aren't going to do anything to improve the MySQL performance or change any of this behaviour.
Let me know if you want ports in any other languages - the algorithm is really just to treat the UUID as a hexadecimal number (that's actually what it is) and re-encode it into any other alphabet of choice.
That said, always use native UUID types in datastores - they'll convert to bytes / numbers internally and will always be the most efficient. For other situations, remember that they're just numbers, so you can write them in binary, ternary, octal, decimal, hexadecimal, vowels, baseXX or really any other alphabet you want. The bigger your alphabet (as long encoding remains efficient, like ASCII under UTF-8), the better your gains will be.
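As a sketch of that re-encoding idea, here is a minimal Python version; the base-62 alphabet is just an illustrative choice, and any alphabet works the same way:

```python
import uuid

BASE62 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def encode_uuid(u: uuid.UUID, alphabet: str = BASE62) -> str:
    """Re-encode the UUID's underlying 128-bit integer in any alphabet."""
    n, base = u.int, len(alphabet)
    if n == 0:
        return alphabet[0]
    digits = []
    while n:
        n, rem = divmod(n, base)
        digits.append(alphabet[rem])
    return "".join(reversed(digits))

print(encode_uuid(uuid.uuid4()))  # at most 22 chars instead of 36
```

Base 62 needs at most ceil(128 / log2(62)) = 22 digits, so you shave 14 characters off the canonical hex-and-dashes form.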
Notice how the author assumes UUID v4 in the conversation. There are very few reasons to use the other versions, but we are still paying their price in code complexity all the time.
Look at this UUID parsing code: https://github.com/sporkmonger/uuidtools/blob/master/lib/uui...
What it really should be is `[uuid_string.gsub('-', '')].pack('H*')` (for non-rubyists: remove the dashes, decode the hex back to binary).
Their representation is also not that good since hex encoding is not very compact.
I guess what I'm trying to say is that UUIDs are often used as a default unique identifiers but they are actually not that good.
In what context would a primary key change, even when sharding? In my entire career I have yet to see it. Also any sane person would never sort random values. If you need sorting in your table, provide some kind of indexed timestamp.
On top of that you get IDs that are impractical to guess, which, while it wouldn't replace other security measures, would still give you some collision resistance and probably avoid some bugs, given the unlikeliness of accidentally picking the same key for two different entities.
I'm sure there are pathological cases for UUIDs as primary keys in certain scenarios, like perhaps a very high number of small records, but I've not come across them myself. You obviously have to know your own data and database if you have some very specific requirements.
Many UUID generators produce data that is particularly difficult to index, which can cause performance issues creating indexes. To address this, Datomic includes a semi-sequential UUID generator, Peer.squuid. Squuids are valid UUIDs, but unlike purely random UUIDs, they include both a random component and a time component.
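A minimal Python sketch of the squuid idea (not Datomic's actual implementation): overwrite the top 32 bits of a random UUID with the Unix timestamp, so ids generated around the same time cluster together in the index, while the low 96 bits stay random:

```python
import time
import uuid

def squuid() -> uuid.UUID:
    """Semi-sequential UUID: leading 32 bits are the Unix timestamp,
    the remaining 96 bits come from a random uuid4."""
    ts = int(time.time()) & 0xFFFFFFFF
    random_part = uuid.uuid4().int & ((1 << 96) - 1)
    return uuid.UUID(int=(ts << 96) | random_part)

a, b = squuid(), squuid()
print(str(a)[:8], str(b)[:8])  # same time prefix within the same second
```

Since the version and variant bits live in the low 96 bits, the result still reads as a valid v4 UUID; only its leading bits stop being random.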
One interesting thing we ran into when implementing is that C#'s binary format and string format must be different to be sequential. So we have to detect whether the GUID is stored as a string or binary and put the timestamp in the correct place to ensure it is actually sequential.
Here's the PR for the feature for anyone interested: https://github.com/PomeloFoundation/Pomelo.EntityFrameworkCo...
This may be practical from a storage standpoint but string-based indexes on an SSD are pretty damned efficient.
Why would you sort these to begin with; what ordering of essentially randomness (part of the point) makes sense?
What about the hi/lo algorithm as a middle ground?
In short, and I hope I don't oversimplify: each "shard" or "cluster" in the database gets a "block" of ids it can then go and assign on its own; the sequential "atomic" increase happens only once per hi "block", lowering the contention.
This gives you nice integers, incremental-ish most of the time.
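A toy Python sketch of hi/lo (class name and block size are illustrative): the shared counter stands in for the database sequence and is hit only once per block, while all other ids are handed out locally:

```python
import itertools
import threading

class HiLoGenerator:
    """Toy hi/lo id generator: grab a 'hi' block from a shared
    sequence, then hand out block_size ids from it locally."""
    def __init__(self, next_hi, block_size=100):
        self._next_hi = next_hi      # shared/atomic source of hi values
        self._block_size = block_size
        self._lo = block_size        # force a block fetch on first use
        self._hi = 0
        self._lock = threading.Lock()

    def next_id(self):
        with self._lock:
            if self._lo >= self._block_size:
                self._hi = next(self._next_hi)  # one contended call per block
                self._lo = 0
            self._lo += 1
            return self._hi * self._block_size + (self._lo - 1)

shared = itertools.count(1)  # stands in for the database sequence
gen = HiLoGenerator(shared)
print([gen.next_id() for _ in range(3)])  # [100, 101, 102]
```

Each client gets a contiguous, locally ordered range, and the shared sequence only advances once per hundred ids.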
I like the notion of integers internally and UUIDs externally (as integers of course! I would never have saved one as a varchar, I swear! OK, I was a noob... I deserve to be shamed).
Great post all in all!
If the IDs are UUID, then the easiest way to fix the values is to drop the index and re-create it, making all of the other data in the index unavailable as it's being recreated.
The less-easy way with UUIDs is to select just the broken events, create new patched events, delete the old events, and insert the new ones in the right index. But you'd have to branch off of your regular indexing logic to do this, probably writing a separate script. Of course if you make a mistake, you may end up with either duplicate documents or loss of data, compounding the original problem.
So I agree, have IDs that are deterministic (that they can be recreated using some known formula and source data, for example: documenttype_externalid_timestamp).
We have multiple components over different stacks and id could be generated anywhere in the components. We had to live with either building unique id per table separate infrastructure or UUID. UUID works perfectly and with POSTGreSQL, it's just awesome.
I'm dealing with that from several vendors atm.
> Think twice

In two cases of very large databases I have inherited at relatively large companies, this was exactly the implementation. Aside from the 9x cost in size, strings don't sort as fast as numbers because they rely on collation rules.
Eh, I've done that before because it made some interaction with Entity Framework easier (don't recall what now). Hasn't really mattered. The space for storing GUIDs has never been a meaningful constraint for anything I've ever worked on (9x is also nuts and assumes that your database uses 4 bytes per character). Sorting UUIDs is also generally uninteresting since they aren't meaningful by themselves. Maybe if you're doing lots of joins you might care about this.
This solves basically all the problems and we use it in production to number several tables with billions of events per day.
UUID-4, UUID-3 and UUID-5 are random (3 and 5 are hashes).
UUID-1 is time-based with the time leading, and you can often control the sequence (14 bits) and nodeid (48 bits) fields to be used as whatever you want to avoid collisions.
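Python's standard library exposes this directly: `uuid.uuid1` accepts explicit `node` and `clock_seq` arguments, so you can partition those bits yourself (the values below are arbitrary placeholders, e.g. for a shard or worker id):

```python
import uuid

# Time-based UUID with caller-controlled node (48 bits) and
# clock_seq (14 bits) fields.
u = uuid.uuid1(node=0xDEADBEEF, clock_seq=42)
print(u.version, hex(u.node), u.clock_seq)  # 1 0xdeadbeef 42
```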
If I follow my advice, the type of an ID is an implementation detail of the persistence layer and/or service endpoint.
Is that normal practice? Their DBA was insisting that its normal.
If you are building mobile apps that sync state, UUIDs make your life so much easier. Optimistically perform writes locally, then perform writes remotely and retry on exponential backoff in case of a network error.
Each of the clients reserve a chunk of Lo numbers, and increment the Hi number. Basically, they would pre-allocate a chunk of id ranges, and this allowed good distributed id allocation performance, while somewhat keeping local ordering.
Client generated ids are very useful to do.
1. Store UUIDs in a UUID field. Why start the article with such a trivial finding as "a text field is not optimal"?
2. Use sequential uuids.
3. Several benchmarks have shown that the performance hit is minimal.
4. The only way to communicate with ids is to copy and paste them. Never try to memorize, talk about them or type them.
it's nicer than using UUIDs because the strings are much shorter.
I don't think this is a real problem. If you're relying on your ID's being "unguessable" (and introducing engineering complexity to that end) for security you've already failed.
What do you think about such setup?
Database coder reinvents interned atoms.
(10 million new rows everyday)
Most of the drawbacks discussed don't exist if you're using a key value store.
How does Apple not expect that annoying developers with their App Store process (so much so that things like this exist: https://fastlane.tools/), AND charging them 30%, AND apparently not actually reviewing anything about the apps making it into the store, will eventually drive people away from it?
(Why yes, I am cranky over the amount of hoops I had to jump through to get to the point of asking apple for permission to put my beta on my co-founder's iPhone)
#2 - Average computer/phone users are willfully ignorant. I would say stupid, but that's a judgement call (even though I think it's true). Someone with knowledge can advise them, but they cannot be bothered with all that fuss. They'd rather ignore sound advice and push buttons. After all, look at who runs the country and the complacence of many of its people.
Have you ever had a friend who was a lawyer? Did you ever get some traffic ticket and think, "Hey, I'll ask Bob if he can help me handle this!"? I'm guilty of this once in a while. But "average users" are guilty of doing this to technical people all the fucking time. And when we advise them of behaviors to change to avoid future incidents, they nod and agree, but then repeat the stupid behavior later.
Sorry for the rant, but perhaps it's time to just start replying to scammed/screwed users with, "Oh wow, that's really unfortunate. I guess you'll have to go buy a new phone/computer." Maybe that will jar them into actually using their brains.
* Edit for wine-related typos.
Also, do people still use the App Store? I don't think I have casually browsed for apps in 5 years or more.
How long will Apple allow this? At the very least it should be impossible to bid on trademarked terms, and no ad should ever outrank an exact-match result.
 Or so I have heard ... from a friend
I also had another app that was accepted into the app store then when I pushed an update release I was informed that my logo had to change because it used Apple's camera emoji. I only did this because another popular app did the same thing (down for lunch). In order to stay compliant, I had to change my logo.
I'm fine with said rules existing as in theory they are meant to protect lay customers from junk like this. How on earth did this thing make it through a review process that's so hard on some apps?
I wish Apple would apply its rules and vetting with more consistency.
I've never done it, either. I clearly remember the only few times I clicked on AdSense ads - once by mistake, and was extremely annoyed at the results (it was a sort of list like search results), and 2-3 times to test my own AdSense ads (yeah, against ToS).
Yet AdSense is raking in billions. I've always wondered who actually clicks on the ads :D
How did this app get through that?
I get why people do it, but it's sad that they do.
Never, I guess.
Little distinction between ads and search results? No filtering or approval for ads? Scammy $100/week subscriptions for nothing? Meanwhile, you're not allowed to make fun of the president's elbows or whatever. Come on.
As a long-time Android user (and no, I wasn't happy for the most part; I wanted to taste the iOS waters both as a user and as a mobile dev) who recently moved to an iPhone SE, I feel really disappointed.
Nice trip into the rabbit hole, though; you should see how bad it gets with VMs.
Which is why the XQuartz/&c. user experience on macOS really surprised me. It's absolutely unusable. Inkscape for macOS basically may as well not exist, as far as my experience with it goes.
Are there other comparable GTK+ apps that work well under macOS or is this a common story?
How in the heck did Canonical squander such an incredible opportunity to be the de facto standard for Ubuntu/FOSS code hosting by letting Launchpad go so stale?
They freaking built it into their distribution of apt with PPA shortcuts, etc.
They seem to want to differentiate themselves (e.g. as "not Photoshop" in GIMP's case) but seem to equate that with ignoring good UI/UX design.
We found it easier to grow and expand all over the world, and didn't grow as much in the Bay Area as we thought. Currently only 20-30 of our 550+ people live in the Bay Area.
Also as far as space goes, that is just one photo of the downstairs area of the space. You can see more at https://automattic.com/lounge/ and some early shots here https://customspaces.com/photo/uklO4BLxis/
P.S. I'm the guy in the green shirt in the photo, woo hoo!
Of my past workplaces -- death-star cube farms in old Silicon Valley, tiny rooms in sweltering Berkeley summers, shiny live/work lofts, a giant sprawling Disneyland-like campus, noisy hipster coffee shops -- that WordPress office would be up there among the good places to work.
The real story is the upward trend that if you give an inch, your employees will take a foot. If you offer telecommute, workers will not show up.
I've been freelancing and telecommuting the past five years. I've built my workstyle around chat bubbles, slack channels, video calls, and emails whether 2PM or 2AM.
I've built my lifestyle around that. As in I work around my life. Things just... get done without a direct measure of productivity anymore.
Sitting somewhere from 9 to 5 is like watching TV from the 2000's, ordering Netflix DVDs when we live in the 2010's with streaming Netflix.
And as one disappears, so does another, and another. When you look around and realize no one else is there anymore, it just becomes a ghost town, while the virtual water cooler becomes more and more vibrant.
No ones goes to the office anymore, it's too lonely.
I think the benefits of working remotely are still poorly understood, and long-term the companies that are being built remote-first are going to have a significant engineering advantage over those that bolt remote working on after the fact.
Now spare a thought for those of us sweating in the digital wasteland that is Australia.
Every so often I have to walk over to my fridge and nudge my 4G modem to improve the signal strength. I have a script running 'round the clock to reset the darn thing if the connection drops completely (somehow this fixes it). I need the 4G connection because the copper wire to my house is so broken it can no longer support an ADSL signal.
Fibre is apparently coming in like... 2019? It is expected to run at a maximum of 25Mbps.
Needless to say, remote work is not exactly on the cards.
I think being remote with an office setup is the best you can get. I can go in any time I want, and still have a nice environment to work from.
Being remote doesn't necessarily mean no offices.
I now have a quiet, private space to work, and a nice 5-6 minute bicycle commute :D
It costs a little bit (~$300/mo for the space & utilities - yay for small-town-Ohio pricing), but it's totally worth it.
No, the goal is to reduce head count without laying people off. Companies that go from remote to non-remote do it because it's an easy way to reduce head count without layoffs; it's a method of forcing people to look for work elsewhere.
People who cannot relocate, or who have built their lives around working from home, cannot or will not easily make the transition back to working in an office. So they will seek out employment that better fits their needs, which is ultimately these companies' goal, because they want to avoid the "XX Company is laying off X,XXX people next quarter" headlines.
We either work out of the Airbnb we rent or a cafe. In some cities we were close to a reasonably priced co-working space and would work out of there.
The big draw for me has been the flexibility. We try as hard as possible to do asynchronous work, so some days I will take a few hour break in the middle of the day and go do something, and then work later into the evening.
I can understand occasionally working out of a coffee shop. But who does this all the time and remains productive? And is it really fair to the coffee shop?
Even better would be if this low-density land could be incorporated into the huge 667 Folsom office/residential project planned next door. You could build 50,000+ sqft on that large lot and help with both the office and housing shortage. Unfortunately SF's planning process is so slow and uncertain that it is probably too late even if the owner and tenants agreed.
Is anything replacing the workplace as the form of community for people or is that something that is just being lost?
Maintaining a 15,000 square feet office in that area for the amount of employees seems oversized in any case.
Of course there are plenty of situations where talking face to face is more informative, but I often find that to be rare.
Communicating via text has the added benefit of documentation and allows you to think about what you are actually writing. I find describing what I plan to do with a client via text helps me organize my thinking.
I work in data analysis though. So maybe this doesn't apply to other fields.
There is countless research clearly showing that open-plan spaces are bad for productivity, yet for some reason they always win. And it's easy to see why: you only have to throw around buzzwords like collaboration, teamwork, open... and done.
It has been "In Review" for a suspiciously long time now. So I think it might be testing the application of these updated policies.
I have often submitted updates to App Review which include the ability to download and install executable code (along with review notes detailing my reasoning) with the knowledge that they would be rejected. I have also appealed Apple's rejections in order to effect a change in policy for the App Store. At some point during phone calls with the reviewers they told me they were "advocating for policy change internally on my behalf" even if they couldn't approve my app right now. I'm so glad policy has changed now.
- The absence of a really good typing story. The 12.9 iPad Pro with smart keyboard is nice for typing text but terrible for moving the cursor around. It's agonizingly slow to do it with keyboard (highlighting is worse, for some reason) and inaccurate to do it with finger/fiddly to do it with Pencil.
The only text editor with vim keybindings (an absolute must in an environment where it's hard to move the cursor normally...) of which I'm aware is Buffer, while the only text editor with both good syntax highlighting and good github integration (via Working Copy) is Textastic. Honestly, I really wish one of those two would just buy the other so that I could have both.
- The absence of a really good ssh story. Prompt is nice, but for some reason, whenever I try to SSH into anything, there's so much latency that it is really painful to actually do anything. Maybe I just have slow network connections? But anyway, so much for just coding on a linode or something in vim.
I think this could really help a lot of students for what it is, and I hope it does well in that regard.
Whatever the provider, I really hate those walled gardens where what you can deliver is at the whims of a company whose interests are not always aligned with yours. I understand being on them is necessary given how large their markets are, but this is really not where I hoped we would be fifteen years ago.
I guess I'm merely venting, and daydreaming about what could have been, "if only"...
The day I can run and write Python natively on iOS is the day I buy an iPad Pro. Right now there are some good ssh clients and I can write code from a terminal, but pros of the device are not worth that tradeoff right now IMO.
I like the safety of the iOS walled garden but I also see real value in complex IDEs like IntelliJ running on iPad Pros.
The degree of paternalism is astounding.
Animation CPU Studio will be published soon.
This title is somewhat confusing - it makes it sound as though educational apps and dev tools somehow weren't allowed to execute code before, which doesn't make any sense.
I hope that we can now expect to get this feature, soon.
What is happening to hacker culture? I think, as the influx of new programmers increases, awareness of the culture's ethos of freedom, liberty, anti-authoritarianism, and anti-corporatism has to increase as well.
Or we will have people who love being jailed by their benevolent overlords at Apple/Google/Facebook/etc.
It wouldn't be a big stretch to say that 90% of quantitative hedge funds use NumPy in some fashion, whether it's directly or via a library that sits on top of it, like pandas or TensorFlow.
I can't think of a more ubiquitous library in the financial space, except maybe QuickFIX (http://www.quickfixengine.org/)...
Maybe numpy's problem is visibility?
Possibly it does its job so well that people don't know they are using it when they use libraries like scikit-learn and pandas?
That being said, I do wonder if numpy is the most appropriate recipient. In my experience with data science, the tool that would benefit the most is not numpy, but pandas. While data scientists rarely use numpy directly, every data scientist I know who uses pandas says they are constantly having to google how to do things due to a somewhat confusing and inconsistent API. I use pandas at work every day and I'm always looking stuff up, particularly when it comes to confusing multi-indexes. In contrast, I rarely use R's dplyr at work, but the API is so natural that I hardly ever need to look things up. I would love if pandas could make a full-throated commitment to a more dplyr-like API.
Nothing against pandas -- I know the devs are selflessly working very hard. It's just that it seems there is more bang for the buck there.
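For what it's worth, the multi-index confusion the parent describes is easy to reproduce. A toy sketch (the data and column names here are invented, purely for illustration):

```python
import pandas as pd

# Toy sales table -- columns are made up for illustration.
df = pd.DataFrame({
    "region": ["east", "east", "west", "west"],
    "year":   [2016, 2017, 2016, 2017],
    "sales":  [10, 12, 7, 9],
})

# Grouping on two keys yields a MultiIndex result, which is where
# many users reach for Google...
grouped = df.groupby(["region", "year"])["sales"].sum()

# ...and the usual escape hatch is to flatten it back out:
flat = grouped.reset_index()
print(flat[flat["sales"] > 8])
```

dplyr's equivalent `group_by` / `summarise` pipeline reads the same way every time, which is presumably the parent's point.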
Numpy is an amazing library, and it's basically Python's "killer app." The fact that you can seamlessly blend numerical/data science computing with more general web applications is what makes Python great.
A side remark: people often say how great the US / North America is for entrepreneurs, compared to (continental) Europe where there is a lot of red tape and regulation. But in my opinion, if I were to do this in Germany, there is no way ALDI (which Trader Joe's belongs to, IIRC) could sue me out of business. Not even with the old frivolous "we are wrong but you can't afford the defense" trick. There is just so much legal uncertainty in North America that it would give me nightmares doing business there.
I can say that this does make me upset at Trader Joe's, and I will be considering where else I can spend my money.
They could have worked with this guy, eventually set up a Trader Joe's in Canada, and then offered to let this guy run it. That would have been better for their brand, in my view.
I care about what companies do. Costco hires employees and treats them well. It pays above average, and it hires and keeps on people with disabilities and injuries, even if they can't do everything someone else can do. It makes me feel good to shop there. And its employees are loyal, hard-working, happy, and friendly, and they have less pilferage than other stores.
This idea that a company has a duty to be a dick is silly. Companies should care about their brand, and about being a good corporate citizen.
> Defendant Michael Norman Hallatt purchased Trader Joe's-branded goods in Washington State, transported them to Canada, and resold them there in a store he designed to mimic a Trader Joe's store. Trader Joe's sued under the Lanham Act and Washington law.
> It is uncontested that Defendant Michael Norman Hallatt purchases Trader Joe's-branded goods in Washington state, transports them to Canada, and resells them there in a store he designed to mimic a Trader Joe's store.
Emphasis mine, and it's a big deal. Trader Joe's would have had a hell of a time bringing a suit if the store had been called Hallatt's Little Shack and had looked like any random grocery store.
He should have realized the need and done things like matching their product mix with his own brands, working on the store's own feel, and dampening any direct association with Trader Joe's. He didn't, and it bit him in the ass. No sympathy here.
I mean, he did change his store's name to Pirate Joe's (from the far more ambiguous Transilvania Trading), and his actions seem to betray less charitable motivations than his words would lead you to believe ("This is not a business I should be doing from a personal profitability standpoint" - https://www.theguardian.com/world/2014/nov/21/pirate-joes-tr...)
That said, seems like Trader Joe's missed an opportunity for a win-win partnership with someone who had already developed rudimentary logistics to meet a demonstrated demand. But then it doesn't surprise me based on my 30+ years shopping at Trader Joe's: I would never describe them as innovative, instead I'd say they are very focused on what they've been doing well for decades.
From my perspective: every product sold in Canada was purchased in the U.S., so if anything, this Pirate Joe fellow has provided additional sales for Trader Joe's and proved that there is demand for Trader Joe's products in Canada, even at an incredible 40% markup!
If they're not interested in servicing Canada, would it not be to Trader Joe's advantage to enter a formal franchising or wholesaling agreement with Pirate Joe?
There must be more to this story in terms of Trader Joes objectives as opposed to Pirate Joe's methods or the legal proceedings.
Then again it kind of annoys me that TJ's just didn't open a damn store in Canada. And if they don't want to do that then why not just look the other way while someone else took on the risk of importing their products into another country?
This certainly wasn't a trademark issue. Trader vs Pirate. There was no question this store wasn't run by Trader Joes/Aldi North. They were buying in bulk to stock a store where they couldn't normally get the goods. Reselling should be 100% A-OK. Any trademarks go along with the products. And as far as I would guess, the grocer certainly wasn't tampering with anything - if (s)he was, they'd go out of business quick.
This is just normal SLAPP-style punitive legal actions that a large monied corporation can do to stop the little guy from doing legal behaviors that they don't like.
The only reason anyone's surprised or outraged is that the store feels like a small, homey, good-natured place full of organic this-and-that that's lower priced than you'd expect. That might have been true 40 years ago, for a store that had the same name but was a different entity entirely.
Trader Joe's now is just a giant marketing and packaging front for 70 billion dollar a year Aldi, a multinational chain. It's a corporation. None of this behavior surprises me at all.
If the person wants to order 10,000 pallets of cookies at retail price, why wouldn't you sell the cookies to him? He's not stealing from the back of the store; he's paying full price. I'm very confused why Trader Joe's would not have created a direct connection with the guy.
This reminds me of major services cutting off API access because they thought they could do it better in-house. Just HIRE the person doing your own service better in a different way.
Maybe they see Target Canada failure and are scared away by that?
The main draw of Trader Joe's is that it's part of the journey across the line. This week I'll be doing the same old routine -- pick up some packages at the mail place ($2 per package), hit up a few grocery stores for different hot sauces and staples (including condensed milk in a squeeze tube), have lunch in Bellingham, go for a walk around Fairhaven, then return home.
Trader Joe's is part of that journey, much like Target (who had a massive, depressing attempt to break into Canada). Strip away that special-trip aspect, and all you really have is another grocery store with a few exceptional items.
Why didn't he just create his own store with his own brand and mimic the Trader Joes products and aesthetic? He could buy goods in bulk at much lower prices. He doesn't have to worry (much) about legal issues or spend money on them.
Clearly demand was so high he could still get away with charging very high prices.
I'm sorry? Trader Joe's, in at least 4 locations I've seen in California, puts up special signs and displays the week before Burning Man to market to Burners. Where is this writer from?
Right now he would simply stop buying the other products while having his own brand.
If I go to Disney World, purchase a Mickey Mouse doll, and take it home, I have the right to do with that doll whatever I want: burn it, give it to my daughter, or resell it at whatever price I see fit.
However, I don't believe I have the right, as an agent of another (presumed competitor), to purchase that same doll and then resell it in my own store. I have no resale agreement with Disney to do so. In a typical reseller arrangement, wouldn't a store (e.g. Target) have an agreement with Disney to purchase bulk product for resale, presumably at a reduced price, but also under strict guidelines as to how it could do so? For example: cannot be sold above a certain price, cannot be sold next to adult content, etc.
On a side note: I have to believe that (while not a TJ problem or related to the lawsuit) there were other issues with what Pirate Joe's was doing related to imports, possible tariffs not being adhered to, etc.
Taking a literal step away tends to help. I've often realized new approaches or epiphanies when mulling a problem while walking or in the subway.
Completely with you on the 'sucking up and sending a kind helpful response'. Snark does not pay. It makes no sense to snap at a user.
Regarding the other point about first-time users. I have a slightly related theory.
When you design something, design it for someone who has the attention span of a two-year-old. Not because your app is going to be used by a two-year-old. But because that is how much mental bandwidth a user is going to give you. Your user is probably busy or just likes to multi-task.
Working that much harder on the UI pays off, or at least prevents a disaster.
This is generally true, but it seems a bit like applying an Enterprise view of sales to a market of minnow sized budgets. It reinforces app consumers' view that apps should only charge for marginal value, not core value or the biggest value. This sort of "freemium" model leads to basically a market of pure crap with extremely rare gems.
Edit: I'm not dumping on the author, here. Were I to "do mobile" I'd probably take a similar approach because it clearly works.
In this app's case, it's about re-imagining an existing function: timetables. The designer knows that user experience is everything, and because of this he's willing to scrap everything if need be. And even when this happens, it isn't exactly waste, as you understand the problem more deeply and arrive at the design of an even better solution.
Sure you can argue that an MVP can bring about those design iterations. Keeps your focus on the users too. But arguably the market for this type of product is very active - though not necessarily competitive. So rather than get buried with the hundred others, it needs to shine right from the beginning.
In reality, most of us don't really have more than a few shots, except in those rare cases of the most trivial apps.
It still remains "a lot": US$500,000 per year is not exactly peanuts, IMHO.
EDIT: Ah, no, wait, I misread the article. He got a handful of downloads when the app was US$1; the 3 million downloads are since it was made free.
unless -- based on personal experience -- it's treated as suspicious by the local police/neighbors, even if it's a skinny, geeky-looking white male who goes out walking alone late at night.
if I had a nickel for every time I've been harassed by police or local do-gooders, I'd have a lot of those nickels. and I'm not even of the demographic that PC-ness says should be oppressed. (ostensibly: black+male, or male+gay, or non-white, or female, or mean-faced, or weapon-carrying, etc. in reality: straight white male, innocent, no weapons, not in a gang, no drugs, etc.) "why are you walking alone at this time? why are you looking at things? implied: are you a terrorist? a pedophile? explain immediately!"
We do not (always) live in an intellectual-friendly culture. At least not in the USA, 2017. We (might, often) live in a small-minded, hyper-stereotyped, very ignorant local culture. Obviously it depends on precisely where you live. SF on Friday at 8pm? very different than Kansas, small town, Wednesday, etc.
not even joking. (And I submit this knowing it's not a HN-hivemind/PC-aligned viewpoint, and thus will be downvoted. I do not care anymore.)
The site in general is a beautiful work of art, a great blend of attention to detail with comedy of computing in that era.
https://en.wikipedia.org/wiki/Lenna - tl;dr is this iconic test picture for computer imaging was a cropped Playboy centerfold from 1972. I've just finished a PhD which included a fair bit of image processing, but I was unaware of the story behind this iconic image.
Title: RPG MO
(Don't leave a space before iframe in the command)
Is this open source? So we could see how it was made?
Accidental "works best in browser X" 90s reference right there.
I find Safari superior to every other browser on any platform in every possible metric except for dev tools, which took a nose dive when they ditched the open source WebKit one for this calamity.
- Half Life 3
- Defrag <3.
- Running Windows93 inside Windows93 inside Windows93 inside Windows93...
A work of art, indeed. Kudos!
Now it's crashed and won't reload.
Is there a workaround for my workflow?
Then I realized everything is written with web technology.
It's also quite buggy (chrome/linux) which adds to the whole Windows 9x feeling. Not sure if intentional but well done anyhow!
Given that kind of zealotry, it irks me that you can launch an infinite number of nested "Virtual PCs". Obviously it makes for some fun screenshots and is technically impressive in itself, but Windows early on never allowed you to run Virtual PC inside Virtual PC. So this is clearly wrong!
In short, not considering OCD, where do I file the bug-report? :)
I wonder how many hours I could waste looking for more Easter eggs ;]
Otherwise, kudos to the devs for creating this amazing work of art!
I'd like to throw some event handlers on "Puke Data" to allow changes to the dsp graph.
Serious hard work went into this site.
dir is not defined
If there's one lawyer in town, they drive a Chevrolet. If there are two lawyers in town, they both drive Cadillacs.
Basically, there are two approaches the plaintiff might take here. The simplest is to cite the doctrine of equivalents. This is basically the notion that if you do the same thing in the same way for the same purpose, then it's the same process, even though you are using digital instructions instead of logic gates. The legal theory here is pretty well settled. The problem is that you'd need to justify that digital instructions are obviously equivalent to logic gates, and a skilled professional would have equated them at the time of the patent's filing.
The other approach is to argue that an emulator actually is a processor, and therefore fits the literal claims of the patent. The explanation for this is pretty well-established: it's literally the Church-Turing Thesis. However, the viability of this argument depends on the language of the patent claims. Also, it's hard enough to explain the C-T Thesis to CS students. My undergrad had an entire 1-credit-equivalent course that basically just covered this and the decidability problem. Explaining it to a judge, who (while likely highly intelligent) probably has no CS background, over the course of litigation is likely to be really hard.
Now, Intel certainly has enough resources to do both of these things (and they may also have precedent to cite, that didn't exist back then or that wasn't relevant to that case). Don't take this as an opinion on any possible result, it's just information such as I remember it.
- https://en.wikipedia.org/wiki/Doctrine_of_equivalents
- https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis
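To make the Church-Turing point above concrete: an interpreter for an instruction set behaves, observably, like a processor for that instruction set. A deliberately tiny sketch (the "ISA" here is invented for illustration and has nothing to do with x86):

```python
# A made-up three-instruction "ISA" and a software emulator for it.
# The point: the interpreter below is, functionally, a processor
# for this instruction set.
program = [("load", 5), ("add", 3), ("mul", 2)]

def run(program):
    acc = 0  # a single accumulator register
    for op, arg in program:
        if op == "load":
            acc = arg
        elif op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
    return acc

print(run(program))  # → 16
```

Whether a court would treat such an interpreter as "a processor" for the purposes of a patent claim is, of course, exactly the open legal question.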
They no doubt have been filing additional patents over the years. But I'm sure MS and Qualcomm have plenty of their own patents to bargain with.
Also their warning could backfire if it gives Microsoft one more reason to finally walk away from x86 compatibility... not that this is likely to happen anytime soon.
> AMD made SSE2 a mandatory part of its 64-bit AMD64 extension, which means that virtually every chip that's been sold over the last decade or more will include SSE2 support. [...] That's a problem, because the SSE family is also new enough (the various SSE extensions were introduced between 1999 and 2007) that any patents covering it will still be in force.
AMD64 requires SSE2 which was introduced in 2001, right? So isn't it just 1 year until Microsoft can put in what's required for the AMD64 architecture?
A scorched-earth policy will likely not be defensible under fair-use law. Reverse engineering for compatibility has a few precedents.
I mean, Apple and Samsung had a billion dollar lawsuit while Samsung chips were still in iPhones. It's certainly precedented to sue a corporation you're actively doing business with.
Intel's strategy of going after other hardware companies may not translate neatly to emulators.
QEMU emulates x86 chips, as do other emulators. I wonder how those are affected?
"if WinARM can run Wintel software but still offer lower prices, better battery life, lower weight, or similar, Intel's dominance of the laptop space is no longer assured."
Peter. My man. I laughed. I cried.
For the millionth time, the ARM ISA does not magically confer any sort of performance or efficiency advantage, at least not one that matters in the billion+ transistor SoC regime. (I will include some relevant links to ancient articles of mine about magical ARM performance elves later.) ARM processors are more power efficient because they do less work per unit time. Once they're as performant as x86, they'll be operating in roughly the same power envelope. (Spare me the Geekbench scores... I can't even. I have ancient published rants about that, too.)
Anyway, given that all of this is the case, it is preposterous to imagine that an ARM processor that's running emulated(!!!) x86 code will be at anything but a serious performance/watt disadvantage over a comparable x86 part.
This brings me to another point: Transmeta didn't die because of patents. Transmeta died because "let's run x86 in emulation" is not a long-term business plan, for anybody. It sucks. I have ancient published rants on this topic, too, but the nutshell is that when you run code in emulation, you have to take up a bunch of cache space and bus bandwidth with the translated code, and those two things are extremely important for performance. You just can't be translating code and then stashing it in valuable close-to-the-decoder memory and/or shuffling it around the memory hierarchy without taking a major hit.
So to recap, x86 emulation on ARM is not a threat to Intel's performance/watt proposition -- not even a little teensy bit in any universe where the present laws of physics apply. To think otherwise is to believe untrue and magical things about ISAs.
HOWEVER, x86-on-ARM via emulation could still be a threat to Intel in a world where, despite its disadvantages, it's still Good Enough to be worth doing for systems integrators who would love to stop propping up Intel's fat fat fat margins and jump over to the much cheaper (i.e. non-monopoly) ARM world. Microsoft, Apple, and pretty much anybody who's sick of paying Intel's markup on CPUs (by which I mean, they'd rather charge the same price and pocket that money themselves) would like to be able to say sayonara to x86.
The ARM smart device world looks mighty good, because there are a bunch of places where you can buy ARM parts, and prices (and ARM vendor margins) are low. It's paradise compared to x86 land, from a unit cost perspective.
Finally, I'll end on a political note. It has been an eternity since there was a real anti-trust action taken against a major industry. Look at the amount of consolidation across various industries that has gone totally uncontested in the past 20 years. In our present political environment, an anti-trust action over x86 lock-in just isn't a realistic possibility, no matter how egregious the situation gets.
So Intel is very much in a position to fight as dirty as they need to in order to prevent systems integrators from moving to ARM and using emulation as a bridge. I read this blog post of theirs in that light -- they're putting everyone on notice that the old days of antitrust fears are long gone (for airlines, pharma, telecom... everybody, really), so they're going to move to protect their business accordingly.
Edit: forgot the links. In previous comments on exactly this issue I've included multiple, but here's a good one and I'll leave it at that: https://arstechnica.com/business/2011/02/nvidia-30-and-the-r...
I think this theory of infringement has to run into various thought-experiment problems, such as: can I auto-translate that binary into some other instruction set, then execute the translated binary, without infringing Intel patents? (Yes, surely.) Is the translator now infringing Intel patents because it has to understand their ISA? (No, surely.)
Now, can I incorporate that translator into my OS such that it can now execute i386 binaries by translating them to my new instruction set which I can execute either directly or by emulation? If so then I am now not infringing. Or did infringement suddenly manifest because I combined two non-infringing things (translator + emulator for my own translated ISA)?
Okay, got it. I'll make sure to account for that in my next CPU/device purchase.
It's quite possible I'm missing something vital here, of course.
And unless Qualcomm and Microsoft are working on hardware-assisted x86 emulation, this warning shot may be directed at somebody else.
My guess: Apple.
AMD licenses x86 patents to Qualcomm/MS to make the x86 emulator more patent-troll-proof. In return, Qualcomm and AMD team up for better ARM-based server processors. MS can sell more Windows/Windows Server (sad).
I would love to see Dell, Lenovo, and HP switch exclusively to Ryzen processors, and to the new Naples CPU in all their server/storage systems.
The rear peep sight on rifles takes advantage of actual "optical effects" without any glass -- much like a pinhole camera can actually magnify images without any lenses or mirrors at all.
By simply providing an arbitrarily small "aperture" you're looking through in the rear, the front-rear sight alignment problem is not only capped at an upper bound of error (defined by the peephole size and sight radius), but the actual error from front-rear sight misalignment is visually magnified and centered through a fixed viewing point, making it vastly easier to keep the actual error near zero.
So generally, to achieve precision within the (small) upper bound of error with a peephole sight, all you need to do is place the front sight post on the target when looking through the rear peep sight. Even better precision is made much easier via a sort of "peephole camera" effect through the aperture of the rear sight.
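That upper bound on error can be sketched with back-of-the-envelope geometry (the numbers below are invented for illustration, not from the comment): the worst-case angular error is roughly the aperture radius divided by the sight radius.

```python
import math

# Illustrative numbers only: a 2 mm aperture (1 mm radius) and a
# 500 mm sight radius (rear-to-front sight distance).
aperture_radius_mm = 1.0
sight_radius_mm = 500.0

# Worst case: the line of sight passes through the edge of the
# aperture instead of its center.
max_error_rad = math.atan(aperture_radius_mm / sight_radius_mm)

# Resulting miss distance at 100 m downrange, in millimetres:
miss_mm = 100_000 * math.tan(max_error_rad)
print(round(miss_mm))  # → 200
```

Shrinking the aperture or lengthening the sight radius tightens the bound proportionally, which is the geometric advantage the comment describes.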
I work from home and I live in the burbs so pistol or rifle shooting is not possible. However, I've gotten really hooked on shooting (of all things) my Red Ryder BB gun. It doesn't make a loud noise, it costs almost nothing to shoot, and it's surprisingly accurate for how inexpensive it is. These little BB guns have iron sights like the article discusses.
My favorite thing to shoot is little plastic bottles--particularly the ones that over-the-counter medication comes in. They're durable and make a nice popping noise when you hit them. I put them on little stakes in the back yard at about 10-15 yards and shoot at them from my deck. As I got better, I made up little games, like shooting them in a sequence and trying to get 100% accuracy. I find it easy to get back to writing code after doing this for five or ten minutes.
Competitive pistol shooters actually use several different sight picture styles.
In the speed styles of competitive shooting, the goal is to hit targets as fast as possible, so you want to make each shot in the "worst" way that will give you about a 95% chance of a hit. So for a close, low risk target, a shooter may look only at the target and ignore the sights, for a tiniest fraction more speed.
For most targets, the looking at the front sight is correct. Shooters tend to lock their upper body into one shape, then pivot it from target to target while shooting a string. This locks the rear sight in just the right place behind the front one. When the front sight is put on target, the rear sight is automatically in the right place. It's true that the target does become blurred a little when you do this.
Then for really far targets, you do have to bring your focus back a little farther, and see and care about both sights.
The sight picture is not the only thing that changes from target to target. You usually budget the amount of time spent for each shot.
Surprisingly, many pros know where their round will hit before it reaches the target. The time penalty for missing a shot is so high that it's almost always better to take a second shot in case of a miss. However, it takes a while for a pistol shot to reach the target, and for your eyes to see where it landed (plus you'd have to change your focus to look for it, then back again to your sights). To get around that, with practice, you can know in the moment you pull the trigger where the round went, and follow it up in about a twentieth of a second with another round.
In most competitive pistol matches, the sequence of targets to be shot on a given stage is not rigidly defined. There are often plenty of constraints (this group must be shot before these) or timing related constraints in some sports (shooting this target will cause a pair of targets to pop up in 1.2 seconds). Given this, there's a surprising amount of planning that goes into discovering the optimum run. The details of each shot are then worked out and mentally rehearsed.
For the use cases that really matter, you won't be taking well-aimed shots, you'll be trying to get rounds out of the weapon in the general direction of the threat as quickly as possible, in order to buy yourself some time and/or space.
The front sight rule is not just the best aiming mechanism for the reasons of geometry described in the article, it's also the quickest way to acquire a basic sight picture under stressful conditions.
And knowing when a gun is being handled safely will prevent many of the accidents that occur when the naive start handling a gun like they've seen done on television and film.
Unlike camera lenses, our eyes can't easily focus on an arbitrary distance without an object being present there. Perhaps the front sight is working as an approximation of the hyperfocal distance.
A constricted pupil (from daylight) has a much greater depth of field than a dilated one (from darkness). So everything will appear sharper in the light of day.
Do at least some practice in low light conditions.
This is still true, but pistol red dot sights are becoming more prevalent.
Is the "two sights" here the rear sight, which has two posts, or two sights as in front sight + rear sight?
Several pages of reading and then... an ambiguously worded conclusion.
I literally only made an account to post about how absurd and out of place this article is. If I wanted some second amendment lovers blog (and I don't), I'd simply find one.
Strike one, "hacker news". Strike one.
Thank god for the HIDE option, you second amendment freaks are everywhere. Back under the bridge you go, losers.
That's because it doesn't factorize the input into separate meaningful parts. The next step in LSTMs will be to operate over relational graphs so they only have to learn function and not structure at the same time. That way they will be able to generalize more between different situations and be much more useful.
Graphs can be represented as adjacency matrices and data as vectors. By multiplying vector with matrix, you can do graph computation. Recurring graph computations are a lot like LSTMs. That's why I think LSTMs are going to become more invariant to permutation and object composition in the future, by using graph data representation instead of flat euclidean vectors, and typed data instead of untyped data. So they are going to become strongly typed, graph RNNs. With such toys we can do visual and text based reasoning, and physical simulation.
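The "graph computation as matrix multiplication" idea above is standard; a minimal NumPy sketch (the graph and feature values are invented for illustration):

```python
import numpy as np

# Directed toy graph on 3 nodes: 0 -> 1 and 1 -> 2.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)

x = np.array([1.0, 0.0, 0.0])  # a feature sitting on node 0

# One propagation step: A.T @ x sums each node's in-neighbors'
# features, so the feature hops along the edges.
step1 = A.T @ x
step2 = A.T @ step1
print(step1, step2)  # → [0. 1. 0.] [0. 0. 1.]
```

Stacking such steps with learned weights between them is the basic shape of the graph-recurrent models the comment is gesturing at.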
Instead of handwaving about "forgetting", it is IMO better to understand the problem of vanishing gradients and how forget gates actually help with them.
And Jürgen Schmidhuber, the inventor of the LSTM, is a co-author of the RHN paper.
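The vanishing-gradient point is easy to see numerically. A sketch with constants chosen purely for illustration:

```python
# Backpropagating through T timesteps multiplies T Jacobian-like
# factors together.
T = 50

# Vanilla RNN with a recurrent factor below 1: the gradient
# shrinks geometrically and all but vanishes (~0.005).
plain = 0.9 ** T

# LSTM cell-state path with a forget gate near 1: the gradient
# survives largely intact (~0.95), which is what the gate buys you.
gated = 0.999 ** T

print(f"{plain:.4f} {gated:.4f}")  # → 0.0052 0.9512
```

The forget gate lets the network learn to hold that factor near 1 when information should be retained, rather than having it fixed by the recurrent weights.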
It's well understood that CFGs cannot be induced from examples, which accounts for the fact that LSTMs cannot learn "counting" in this manner; nor, indeed, can any other learning method that learns from examples.
> "Strings generated from"
The same goes for any formal grammars other than finite ones, as in simpler than regular.
Is anyone working with LSTMs in a production setting? Any tips on what are the biggest challenges?
Jeremy Howard said in the fast.ai course that in applied settings, simpler GRUs work much better and have replaced LSTMs. Any comments on this?
You would think an article like this would define LSTM (long short-term memory) somewhere.
- Microsoft Word
- Various proprietary WYSIWYG editors that compile to HTML
- Raw HTML
- Markdown (several flavors)
With nearly every kind of migration, there are numerous pain points. The "raw" formats are a nightmare to edit and update, and the compiled ones require several hours of changing syntax, image locations, etc.
I've been getting so tired of having to re-do stuff on different platforms that more of my docs are starting as Plaintext and then written in pseudocode markup for areas that I know will change on every platform (e.g. generating a table of contents, image tags, etc).
Having just coded an entire website from scratch that was basically just documentation, I found Markdown comes remarkably close to doing what I want -- except when the common format fails to meet my needs, forcing me to switch to a specific flavor of Markdown to get something as basic as tables.
The docs of mine that seem most resilient to platform shifts (other than plaintext) are the ones that are written in or compiled to longstanding formats like LaTeX or HTML.
So perhaps my takeaway is, write in something readable that compiles to something widely available. That will provide the least headache.
Here is an example of using the IPython kernel to evaluate inline Python code within an OrgMode document.
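(The snippet itself doesn't seem to have survived; what such a block typically looks like, assuming a package like ob-ipython that registers an `ipython` Babel language, is roughly this:)

```org
#+BEGIN_SRC ipython :session :results output
print("evaluated inline: %d" % (40 + 2))
#+END_SRC
```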
More information on how to create multi-language notebooks with OrgMode Babel here
Plain text: so that no one can own the distribution method.
Plain text: so that no one can own the creation method.
Plain text: so normal people can recover data even when partially corrupted.
Plain text: so you aren't forced to see jarring ads.
Plain text: so that there are no tracking pixels.
Plain text: because connecting information with hyperlinks doesn't require all of HTML or even computers.
Plain text: because it's good enough for metadata.
My future and knowledge is in YAML-fronted markdown and YAML metadata for binaries. Let's take back our data. Look out for Optik.io.
I think markup languages like markdown which are both fairly easy to convert into other formats and deliciously human readable are the way to go.
Remember that one of the major breakthroughs of the World Wide Web was that HTML meant documents were no longer plaintext.
We deserve better than this reductionist thinking. Constraints can breed innovation; but they can also just constrain.
That doesn't mean it has to be the distribution / consumption format.
One of the great things about something like Markdown is that it can be rendered to HTML trivially, to display video, equations, etc.
Same thing for ebooks, PDFs, whatever (thanks Pandoc!). It's also easy to translate between formats (e.g. .md, .org, .rst, etc.).
If a new format comes along that everyone wants, there's an extremely good chance that plain text can be rendered to it.
The reverse is not true.
There is a naive assumption that all platforms and operating systems will treat your text (everything is either text or binary before it is parsed into something else) equally. This is false. When this fallacy becomes self-evident, many developers will refuse to modify their assumptions, in the belief that consuming software will figure it out properly. Sometimes that is true, and sometimes it will absolutely break your code/prose/data. Clearly that assumption carries a heavy risk, and this is just data at rest.
When it comes to data moving over a wire the risk increases substantially, because all software that processes that text may make custom modifications along the way. You don't see it so much when the protocol is primitive like HTTP, pub/sub, or RSS (but it still does happen frequently). There are many distribution protocols that are less primitive and absolutely will mutilate the formatting of your documents, such as email (which is why there are email attachments).
I have sent plain text files to some of my colleagues in the past (so that there is a 100% chance that they could read the file), and they were unable to open them because of this issue with choosing the default application, and asked me what app they needed to download to view the files.
I have a kid in middle school, and he has a tablet. These things are often pushed as "educational". Pop quiz: you walk by, and you need to determine, within a couple of seconds, if what he's doing is actually educational. Here's what you see:
1. lots of graphics, whizzing around the screen
2. black alphanumeric characters against a white background
Now, you don't actually know, but generally speaking, the second is a better indicator than the first.
I realized this applies to my own work as well. There are parts of my job that I consider extremely useful to the world, and parts that I really gotta wonder about.
Again, it's not a guarantee, but I'm starting to consider a very general guideline: if you are looking at symbols and alphanumeric characters, the odds that you are building something of lasting value are much higher than if you are looking at things with elaborate UI elements.
It's not 100%. My kid could be watching Citizen Kane and developing an interesting critical point of view. He could be reading 101 fart jokes. It's not a perfect match. There are worthy and unworthy things on both sides.
But as a general rule, for culture and career - if you're looking at plain text, that's a good sign.
You can run two versions of a markdown file or a LaTeX source file through a diff, and see what's been changed. Try that with a PDF or Word file or what have you.
As I like to keep my files in version control, I use plain text formats as much as possible.
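The diff workflow above can be sketched in a few shell commands (the filenames and contents are made up for illustration):

```shell
# Create two revisions of a plain-text markdown file.
printf '# Notes\n\nFirst draft.\n' > notes_v1.md
printf '# Notes\n\nSecond draft.\n' > notes_v2.md

# diff exits with status 1 when the files differ, so tolerate that
# when capturing the unified diff to a file.
diff -u notes_v1.md notes_v2.md > notes.diff || true

# The diff shows exactly which lines changed -- something you can't
# get from a binary format like PDF or .docx with standard tools.
cat notes.diff
```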
Which is better?
I would say the second one is more effective.
(1) When you compose text you want to compose first, style later. WYSIWYG mixes the two and you end up with crappy spelling and half-arsed formatting most of the time.
The post correctly notes that R Markdown files are plain text, but the benefits of such (version control) are not discussed by the OP.
For exchange and processing data (data != information), UTF-8 text with contextual formatting like Markdown, CSV, etc is nice as it is generally tool agnostic.
But for conveying information (information != data), plaintext sucks. That's why Markdown is a copout -- the formatting still matters... and humans don't perceive markup coding as well as the rendered result. Humans do better with visual cues when interpreting written data. Formatting, typesets, bullets, tables, graphs, etc. all help us process and contextualize data.
If being able to reference information or data over time, where time > 10 years, you need to think about what you're doing. (Archivists do this professionally.) PDF is the quick path to address this for format-sensitive applications, as big & important institutions like the US Courts use PDF for their documents -- it isn't going anywhere. Big datasets are more complex to deal with... you have to decide whether the raw data should be preserved vs. the processed/analyzed data, etc.
The current content for education is good, but it is definitely bandwidth-heavy and tough to maintain. But dropping to plain text will force us to let go of a few things that otherwise make learning more effective.
I think HTML (or WYSIWYG-style editors that feel like plain text, but can be powerful with images, videos, and animations when required) does much the same thing as plain text:
- it is always compatible (I grant that it takes effort to run hi-fi stuff on browsers, but still better than plain text)
- it is easy to mix and match
- it is easy to maintain (thanks to many editors)
- it is lightweight (not compared with plain text, of course)
- it is forward compatible (unless all browsers decide to drop support for HTML in its current state).
I appreciate the thought behind bringing this up. I think writing something in plain English, which can then be turned into some super cool learning material that runs everywhere would be awesome. It helps both in solving the issues mentioned in the article at the same time keeps learning effective.
Is this UTF-8, latin-1, 7-bit ASCII?
The process took about a year. My own estimate is that we lost about 2-3 months to tool-related problems, without any benefit whatsoever. I have never in my 20 year career understood the point of word-processing tools, and never will.
And those tools make money. That is wrong.
When I think of lecture notes I think of two uses:
1. An informal reference/mnemonic for the lecturer. This use suggests a format that suits the particular lecturer (which may not necessarily be text, or even a digital format).
2. Potential answers to exam questions for situations where there are too few instructors (i.e., lecturer plus TAs) chasing too many students over too short a time for students to practice critical thinking.
Are there other uses? If not, I can imagine standardizing notes across schools could be a detriment by streamlining "plugging-and-chugging".
Let's agree on markdown/rich text + non-interactive media (images, sound, video)
I have been meaning to look into what alternatives are out there.
I'm in my 40s. Incredibly old for HN standards. And yet, I feel no nostalgia for the "good ol' times." I mean, don't get me wrong I'm sure there's a lot of things that set me apart from newer generations -- I don't get Snapchat at all ;) -- but I don't see me being happier by being put in a house set up to look and feel like the 90s/80s.
Is it maybe because we as programmers tend to be less prone to getting stuck in the past? Just wondering.
How we feel and what we think of ourselves affects our levels of Testosterone, Cortisol, Serotonin, etc. Even a 5 minute conversation can give you a T boost of 30%+ ... or believing that you're perceived as high status alters your Serotonin. Those hormones in turn make you more vital.
So who knows what was the reason... maybe more social interaction with strangers? Or simply putting their mind into a different, better place?
There were lots of things I could do in my 20s (e.g. refuse to use gasoline-powered city transportation, refuse to patronize places that used disposable cutlery, refuse to use non-free software, etc.) that I can't do when I'm in my 30s because people around me would think I'm a stubborn idiot, jeopardizing my career at a point where I have not yet established myself. It's very easy to tell a colleague, advisor, anyone at school that you're going to bike to the destination or take electric-powered transit [because you don't believe in a fossil fuel future]. It's very difficult to say the same thing to an investor, co-founder, employee, customer, or whoever is offering you a ride in their car, without feeling like an ass. I'm basically forced to be "normal" during work times and fit into the mould of society. I can only be myself on evenings and weekends.
I can only imagine how much more "being normal" I need to do if I had kids, pets, tenants, or whatever. I don't have any of those at the moment. The other night I was pondering over potential improvements to our music and mathematical notation systems while staring at the Milky Way. (I didn't come to anything conclusive, but I love thinking outside the boxes that society defines for us.)
10 years ago, I could truly be myself 24 hours a day. I was basically learning all kinds of things about the world by doing that. Now, I only get about 5 hours a day to be myself. The rest of the time, I need to conform. The lack of "me" time itself may contribute to some degree of mental rot/aging, apart from the biological component.
Which is to say, I'm dubious as hell of this result: for something this click-baity, at this point in the history of psychology research, I'mma need some serious replication before I give it an ounce of belief.
I think the computing party is just getting started. Non-trivial domestic AI will be here within a couple of years, personal robotics 5-10 years after that.
The current ad mania sucks, but it's going to have to evolve or die.
I don't miss much of the past. Pocket phone computers, tablets, GPS, video calling, massive data storage, and the potential of renewables and distributed energy grids are all awesome. Like.
Even social has its moments.
The real problems are cultural and political. There's been some movement there, but not nearly enough. The system has nearly enough energy to go through a phase change soon, and that's when things will get really interesting.
Moi? The body and mind are both subject to: use it or lose it. We also, as humans, tend to assimilate into the norm around us, be it smoking, obesity, and now, I guess, perhaps youth.
Finally, I have to wonder about the effects of essentially being on holiday. In addition, perhaps the group discussions energized them? That is instead of waiting to die, they had more reason to live? In any case, interesting.
But ultimately the end is the same. You can't reliably exercise your way to 90, even. The majority of people who are exceptionally fit die before reaching that milepost in the environment of the last 90 years of medical technology. The future of health and longevity in later life will be increasingly determined by medical technology, and nothing else. Aging is damage, and that damage can be repaired given suitable biotechnologies to do so.
DNA methylation patterns correlating strongly with age are a very promising tool when it comes to assessing treatments for the processes of aging. Companies offer various implementations now - see Osiris Green for a cheaper example, to pick one. In the SENS view of aging as accumulated molecular damage, epigenetic changes are a reaction to that damage; a secondary or later process in aging. We'll find out over the next few years how the rejuvenation therapy of senescent cell clearance does against this measure, now that things are moving along there.
But you shouldn't think it impossible to construct useful metrics of biological age more simply. There are a number of excellent papers from the past few years in which researchers assemble weighted algorithms using bloodwork, grip strength, and other simple tests as a basis into something that nears the level of discrimination of the epigenetic clock.
When it comes to a biomarker of aging, there are lots of promising candidates. Researchers will spend a lot of time arguing before they come to any sort of pseudo-standard for that task. Industry (today meaning the companies developing senolytic therapies for the clinic) will overtake them and, I'd wager, adopt one of the epigenetic clocks because it basically works well enough to get along with, and can be cheap in some forms.
'aging' is the word you are searching for
The really scary part was after dialing the number and encountering the operator, we were unable to hang up (any time we hung up and picked back up, the operator was still there, even after waiting about two minutes). Fortunately this was (a) at MIT which still had a central electromechanical telephone switch for student phone lines in the '80s and (b) I had keys to the switch as a student phone repair tech.
I still remember grabbing my keys, running over to the switch, and physically pulling the relay contacts to release the call and prevent a trace to our location in case that was the motivation for holding the line (nowadays traces are digital and instantaneous, but when looking at old-school electromechanical switches you really did need time to trace the call physically through the relays).
Yes, we were aware the operator was probably just messing with us by showing he could hold our line against our will to discourage us from calling again, but it still scared the crap out of us just in case.
There is also another service called WPS for cell phones where you get priority just by prefixing your number with *272, the only catch there is your specific phone needs to be enrolled.
But yeah - it's all the luck of the draw. Some phone people have had varying levels of luck with other things involving that area code as well: http://www.binrev.com/forums/index.php?/topic/48478-weird-71...
For example, this PDF explains more than anything present on HN or Wikipedia: http://chicagofirstdocs.org/resources/060912-GETS.pdf
Here's a doc that covers all US Federal emergency communications: https://www.dhs.gov/sites/default/files/publications/nifog-v...
I originally discovered this guy from HN and the audio recordings on that site are mesmerizing to me.
If a number was not an active customer it was put in an outbound call list to solicit long distance.
The best story I remember was when the Navy wanted to know why we called one of their nuclear submarines. This implied that the right 10 random digits contacted a sub.
I suspect it got killed off because so many businesses were switching to cheapo, poorly-made, Winmodem-based PBXes that didn't recognize the area code.
808-248-0002 - "Your GETS call is being processed. Please hold."
I feel bad for that operator
Sounds like a major security problem, and during a crisis is especially when I would not like to have a buffer overrun.
Tablet PCs were way ahead of their time and suffered as a result imho. It was hard to find one that wasn't under-powered, and I suspect that it was a way to make them affordable, but hot-damn those things were cool.
Frankly I'd argue we've still to perfect that idea. We've got those "transformer" laptops now-a-days, but finding something with a decent digitizer has still been elusive; or at least it is to me.
Regardless, thanks for the fish Mr. Thacker, hope you enjoy your seat at the pantheon of computer gods.
Note that this change depends on the shared PID namespace support, which is a larger, still-ongoing endeavour.
Coming into the k8s ecosystem with very little container experience has been a steep learning curve, and simple, concrete suggestions like this go a LONG way to leveling it out.
Feel free to take them for a spin; feedback is welcome and appreciated.
Right now, we have a bunch of microservices. Most of them talk to our shared infrastructure. We started with a single configuration file, which has grown to monstrous proportions and is mounted on every pod as a config map.
What would be the correct approach? Multiple configmaps with redundant information are just as bad, if not worse.
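One possible split -- a sketch with hypothetical names and keys, not a recommendation from the thread -- is one ConfigMap for genuinely shared infrastructure settings plus a small per-service ConfigMap for overrides:

```yaml
# Shared infrastructure settings, defined exactly once.
apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-infra        # hypothetical name
data:
  KAFKA_BROKERS: "kafka:9092"
  REDIS_URL: "redis://redis:6379"
---
# Per-service settings only; no duplication of the shared keys.
apiVersion: v1
kind: ConfigMap
metadata:
  name: billing-svc         # hypothetical name
data:
  BILLING_DB_URL: "postgres://billing-db:5432/billing"
```

Each pod can then layer both via `envFrom` (`shared-infra` first, then its own map); when a key exists in multiple sources the last source takes precedence, so per-service maps stay small and the shared information lives in only one place.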
Edit: oh, you kind of do. Well, it's not upcoming any more, it's in the latest Docker CE :)
If some of you are interested in a Kubernetes GPU cluster for deep learning, this article might be good to read as well: https://news.ycombinator.com/item?id=14526807
The k8s blog has some as well: http://blog.kubernetes.io/2016/06/container-design-patterns....
I've seen this pattern before and it didn't make me feel very good. It reeks of unnecessary complexity.
Kind of... but you can set `restartPolicy: Always` and it will always restart in case of failure.
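For reference, a minimal sketch of a pod using that policy (the name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: worker              # hypothetical name
spec:
  restartPolicy: Always     # kubelet restarts containers whenever they exit
  containers:
    - name: app
      image: example/app:1.0   # placeholder image
```

`Always` is the default for pods; restarts are done with an exponential back-off, so a crash-looping container won't spin at full speed.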
This was brought up in a thread last week about "As a female, how do I identify a good employer?"
The best answers basically said "work somewhere that has as boring a corporate culture as possible". Basically, work for a place where you are rated on your production and nothing else. Work elsewhere and things like "how late did you work?" -- a metric that is far easier for people without children to meet -- cease to matter.
Working late isn't the only thing (though it is a big one), but it tends to correlate with "immature HR practices" in general. Inclusivity is about recognizing that people have life configurations that differ from your own, and creating the space for those differences to exist.
I lost count of how many times, something innocent like not going to lunch with the team regularly (I'm a picky eater), or participating in whatever game the team was nuts about (foosball, or various exotic board games) turned into personnel issues where all of a sudden I was "unavailable to the team", or "distant and aloof" etc, even though my professional contributions were just fine or even stellar.
You can imagine how stressful it is to show up to work everyday wondering what bullshit non-work related nonsense is going to come up that day and require another stupid chat with your manager. And in the midst of that you're expected to keep up a cheerful demeanor and work well with the same assholes that keep bringing up this irrelevant crap because the fault in these interactions couldn't possibly be with them.
The day it becomes about the work, and not personal discomfort with new and differing points of view about communication and interaction, diversity at tech companies will become an after thought ... in a good way.
Yes! Yes! Yes!
I don't drink, and it's kind of sad that I sometimes miss out because I don't go to the bar. I like to bike instead; why can't I do my own thing after work without feeling pressure to go to the bar?
Handling work stuff at work, I feel, is the way to go.
Read this three times and I can't understand what it means. Can anyone "translate?"
On the overall topic, it seems really obvious in retrospect that removing formalities in the workplace turns the office into a social club, and those who don't want to socialize are excluded. It's certainly an unintended consequence though.
I can certainly see the benefits of formality in the office now that I'm older.
If you only work a minimum number of hours within your field, you are unlikely to emerge as one of the peak achievers or thought leaders in your field. That's just because you learn more from experience, and working more hours gives you more experience.
You can extrapolate from there what this means for companies and individuals.
I am not at all saying that companies should ask people to work long hours. (I run a software company, and we are super-lax about hours, people showing up at the office, etc). But I am saying that if an individual wants to be an expert in a particular field, that person should probably work a lot (and probably wants to work a lot anyway, due to interest in the subject). This doesn't necessarily have to be at the company; it could be at home, on personal projects, whatever. But the deeper and more challenging the project is, the better you learn, and it's easier to have one project that is deep and challenging than somehow to have two in parallel. And if only one is deep and challenging, then you are sort of idling with half your time. So there are basically two paths to this kind of deep work: work for a company, make sure you get a project that's really good, and then work hard on it; or go do your own thing, make sure you have enough money somehow, and work hard on what interests you.
This also means that "work-life balance" is not a thing for experts the way it is for normal people. But that's fine, because for these kinds of experts their work is a serious part of their life and the two things are inseparable.
Of course if you don't feel this way about what you're working on, that it is a serious part of your life, then this strategy doesn't make sense; and I would not encourage people who don't feel this way (who are the majority of the population) to work that hard. I am just pointing out that there are some of us for whom a different life strategy is best.
Staunchly meritocratic online interaction and collaboration, from software development to messageboards, allows people to cultivate identities largely defined by their contributions, which is often distinct, or even at odds, from the identity they wish to demonstrate in their real life. In online spaces where individual contributors aren't restricted from speaking out against the leadership, this disconnect will manifest instead of being suppressed.
While I don't disagree with the author's recommendations and rationale, it's unfortunate that the OP's argument essentially reduces down to the fact that the less casual interaction between people, the more inclusivity will result. It's also re-framing the implied problem: the equality vs. equity debate. In the OP's view, the solution is to cultivate a minimalist, work-focused culture that solves the inclusivity question by avoiding it entirely. This is very much at odds with the approach that receives a lot more press these days, which seeks to prescriptively address inclusivity within its own problem-space.
The difference between a startup culture and a corporate culture is the difference between a creative company and a disciplined company. "Discipline" is like a swiss knife, something that can work anywhere and everywhere. Creativity only works in some places, in places that are desperate, in places that are still making basic decisions, in places where the problems are high and the solutions are few.
A disciplined company has no problem being acquired by a creative company. But a creative company has many problems when it starts masquerading as a disciplined company. (Read: Microsoft acquires Company X and writes it off 5 years later.)
Working in a disciplined company is easy for most people. No manual required. Working in a creative company is difficult for most people but easy for creative people. Most foreigners or people with diverse minority backgrounds have a difficult time adapting to very social environments. They would rather stay strictly professional and confined to their work.
But here is the problem: what is the point of having diversity if social interaction is nil? How messed up is your social world if it does not include unsocial minorities?
There is a balance that is needed. Google started as creative and became more corporate and also became more "boring". (Sergey Brin's word)
> But above all I didn't have the cultural and social capital to know how to dress casual in the right way. My casual dressing was made of nerdy, unfashionable and cheap clothes: you could immediately say that I haven't accomplished anything. And I didn't even know that there was a rich way to dress casual.
there's more art to looking sharp in casual attire than in a suit and tie!
> Tyler Cowen: Well, being a casual person myself, I'm very glad being casual is in vogue, and probably will stay in vogue. But what I find striking is societies with a lot of upward mobility often tend to have strict dress codes. So you see this today with Mormons, at Mormon businesses. You see it in Japan in its heyday years--you know, the businessman or journeyman suit, they more or less all looked the same. There's something about upward mobility where actually clothing is not that casual and one is being more formal in trying to impress; and that is a [?]. But the thing about being casual is it actually makes it harder for people to prove themselves. So, Bill Gates goes to a meeting and he may show up dressed very casually; but he's still Bill Gates--either everyone knows or if you really needed to, you could Google him. So there's a code of casual that's actually very difficult for, say, people from other cultures in America to master or demonstrate that's actually made signaling harder. Just that right way of looking casual is in a funny way more conformist than like the blue suit and tie, which you could do and then innovate around and try to climb to the top. So I find this disturbing, the more I think about it.
> But maybe that comes at a cost. If we set aside that desire and focus on what we're really trying to do here -- make good software -- then maybe we'll open up some different possibilities. By constraining the number of things we have to agree on, and the number of hours we have to spend agreeing on them, we naturally open ourselves to a diverse world of talented people.
Much as we might wish otherwise, I think this article is right that informality and diversity are in tension (though I think it's massively wrong to conflate informality with long hours; it's very much possible to have a culture where you drink alcohol, play Rock Band, play board games, but still go home after your 35 hours/week). But having to give up informality would be a very heavy price. For me a comfortable life is the end and making good software is the means. But even if your goal is good software, looking at the past couple of decades of big professional companies being displaced by scrappy startups, informal organizations seem a lot better at producing good software.
Sometimes, to evolve, adapt and gain the edge, you have to be loose and unprofessional.
Sometimes, to survive a famine or a drought, you have to ruthlessly cut what isn't absolutely necessary.
These are the other phases of the business cycle that the author neglects. Professionalism, openness, and work life balance belong to a certain phase of the cycle. That phase does not come from nothing and it does not last forever.
Yesterday, I had a wide-ranging Slack conversation with some very nice people who patiently allowed this privileged white male to repeatedly touch the third rail of diversity and inclusion. That conversation led me to the realizations in this post. I'll thank them by not naming them, and by promising never to bring this up in their Slack channel again.
In other words, people openly hated on him for wanting to discuss something with them and get informed -- a white male in a position of power that few women or poc occupy. And all they can do is make him scared to bring it up again and act like the abuse they heaped upon him is some sort of privilege he didn't deserve or something.
I am so sick of women and people of color being openly hateful to people who were born the "wrong" gender and color to be part of the unfortunate many. Hello? Whining about how "it isn't my job to explain this stuff to you!" instead of being all "OMG! An opportunity to have a useful conversation with a white male who is actually curious about how the so-called other half live!" is part of the problem, not part of the solution.
(Before you auto-downvote this on the assumption that I am some overprivileged asshole man, please note I am a woman.)
That some companies with great effort manage to compete over the few female developers on the market doesn't prove every company could hire lots of female developers if only they changed their culture.
To be honest, personally, even if there were those hundreds of thousands of female developers supposedly driven away by bro culture, I would still maintain that people should have a right to create companies they enjoy working in. If some people want to work in T-Shirts and get drunk every night, it is their right to do so (if they can earn the money to sustain it).
Luckily not all companies are the same, so that people can apply to companies that suit their tastes.
If it weren't so, there wouldn't even be a need for hiring or job seeking to begin with. People could just apply to the next best company and be hired, likewise, companies could hire the next best applicant - because there would be no such issues as cultural fit or whatever. Not very realistic (source: I am not friends with everybody and not everybody is friends with me).
It is a problem that corporate America tries to optimize function by getting everyone closer and closer together with team building exercises and alignment of values.
Values are deeply personal and we should recognize that people are going to differ. Freedom of conscience is as basic as freedom of religion and important for the same reason.
If we keep work a professional space we maximize diversity of thought and life experience, which are ostensibly what the large push toward ethnic and gender diversity are a proxy for.
The reason Fog Creek works well, is because it's very smart people, who care about what they're doing. They care because it's a product company - they get to make decisions that impact the product. They feel a sense of ownership.
Contrast that with a sweat, uh sorry, I mean dev shop. Contrast that with doing contract work for big companies where you come in, leave 6 months later. Contrast that with start-ups that only exist because someone got free money.
Contrast that with shit maintenance work at big corps.
Does that about cover 95%, if not more, software jobs out there?
There is no fixing shit workplaces because the foundation is rotten. When you have no say, when you don't care about the product, when you move around every few years - yeah, it's shit culture.
There is no fixing that - most people long for a stable group of people they can make something happen with.
Most people are confused about how much work and dedication it takes to make something great. Most people's actions create what most people complain about and they don't even know it. There is no fixing it, there is only becoming good enough to either start your own Fog Creek, or be good enough to join one.
I don't see a problem with having companies with corporate culture and companies with startup culture side by side; just because I dislike the suit and tie culture doesn't mean I want it gone. However, from reading the article I get the impression that the author wants the more liberal companies gone just because he doesn't like them.
Hmmmm... I agree at least in principle that one shouldn't be required to always hang out late after work. However, I admit it's occasionally useful to understand your co-workers' motivations, and spending a bit more time with co-workers is sometimes certainly reasonable and helps build trust and respect among them.
Maybe people should be working only 30 hours a week and spending the other 10 hours just on team building.
I also think it's useful to understand the social aspect of things, because understanding motivations can help the team solve problems in a way that everyone will agree to.
I bet there's more than one out there.
Work/life balance doesn't "work" for a lot of people, a lot of types of work and a lot of lives. Astronauts, Presidents, Prophets . . . startup ceo's . . .
Like the overnight train that left me in an empty field some distance from the settlement, the process of economic development has for the most part bypassed the two hundred or so families that make up the village of Palanpur. They have remained poor, even by Indian standards: less than a third of the adults are literate, and most have endured the loss of a child to malnutrition or to illnesses that are long forgotten in other parts of the world. But for the occasional wristwatch, bicycle, or irrigation pump, Palanpur appears to be a timeless backwater, untouched by India's cutting-edge software industry and booming agricultural regions. Seeking to understand why, I approached a sharecropper and his three daughters weeding a small plot. The conversation eventually turned to the fact that Palanpur farmers sow their winter crops several weeks after the date at which yields would be maximized. The farmers do not doubt that earlier planting would give them larger harvests, but "no one," the farmer explained, "is willing to be the first to plant, as the seeds on any lone plot would be quickly eaten by birds." I asked if a large group of farmers, perhaps relatives, had ever agreed to sow earlier, all planting on the same day to minimize losses. "If we knew how to do that," he said, looking up from his hoe at me, "we would not be poor."
1. how they didn't have pest problems if they planted in fractal patterns
2. but they did have pest problems if they didn't plant at the same time
Could someone kindly explain that in a little more depth?
Beyond the Introduction, this seems to just be the recommendations from the Covington report; the full report (per the Introduction) was to cover (1) Uber's workplace environment as it related to the allegations of discrimination, harassment, and retaliation in Ms. Fowler's post; (2) whether the company's policies and practices were sufficient to prevent and properly address discrimination, harassment, and retaliation in the workplace; and (3) what steps Uber could take to ensure that its commitment to a diverse and inclusive workplace was reflected not only in the company's policies but made real in the experiences of each of Uber's employees.
This document only includes the part addressing (3), which implicitly indicates that the bottom line conclusion on (2) was no, but doesn't really provide any clear information on (1).
Edit: This is the executive summary. It contains a lot of what to do. Going forward, you don't need to know so much how Uber got into the mess they're in. You need to know how to stay out of similar messes. This is a good plan for how to stay out of such messes.
But it doesn't tell us what they learned through the process.
"Uber should consider moving the catered dinner it offers to a time when this benefit can be utilized by a broader group of employees, including employees who have spouses or families waiting for them at home, and that signals an earlier end to the work day."
Note to Linux users: this PDF looks terrible without msttcorefonts installed. I guess MS Word neglects to embed fonts? Also, as someone who is not used to seeing documents generated by MS Word, I'm surprised at how bad the typography is in general (although maybe this is due to user error... for example, it looks like hyphenation might be disabled).
Huffington: There's a lot of data that shows when there's one woman on the board, it's much more likely that there will be a second woman on the board
Bonderman: Actually what it shows is it's much likely to be more talking
What a garbage fire.
I genuinely believe that people work hard because they believe what they do has meaning, not because the company serves free dinner/beer/water at 7pm or 8:15pm. I don't understand why you should run a fast-paced startup like a non-profit. As someone who used to work in law firms: not only did we not have catered anything, we regularly stayed until after 10pm and were on call during weekends/holidays, and we made barely the same as, if not less than, a first-year engineer. And you let a law firm make "better workplace culture" recommendations. I am so lost.
Is this thing on?
The most substantive recommendation, IMO, is that they suggest a COO that controls most of the day-to-day. It's a clear move to reduce the CEO's power, and most likely a path to remove the CEO in the future unless the CEO regains power, which is unlikely.
Every metric has the power for evil. This will be gamed, simply by measuring these things behaviour will be affected in unpredictable ways. I understand the challenge they're up against but I absolutely cringe at turning some of these subjective items into metrics.
This is a very important point to remember about subterranean tunnel systems. It is exactly what came to my mind when I watched the Boring Company video about a huge network of 3D tunnels. The tech press, which had probably never even covered a construction project, let alone tunnels, was basically like "what about earthquakes?" But tunnel collapse is not the primary safety issue.
It's fires. Smoke and toxic gases from fires spread very quickly through tunnels.
I am a huge fan of the concept, by the way, but I want to emphasize that most fatalities in traffic tunnels have come from fires (apart from ordinary traffic accidents). And Elon Musk has stated that what makes this vision feasible from a cost perspective is smaller tunnel diameters, which makes smoke and gas spread all the faster and ventilation all the more safety-critical. Thus any vision of tunneling that doesn't detail fire safety, evacuation systems, and firefighter access is significantly incomplete, as these can add significant cost and fundamentally constrain designs.
Michael Punke, the author of the Revenant, wrote an excellent non-fiction book about it called "Fire and Brimstone: The North Butte Mining Disaster of 1917".
If you search for "stench gas" you will find some... interesting photos of the control panel for these systems:
(Something about buttons marked "release stench gas" and "release anti-stench gas" seems darkly comical.)
Unlike carbon monoxide this compound had a very "this will kill you" smell to it.
Right now the smells are simple signals; I'm curious if a scent could be engineered to contain a language. Like paper and writing.
I'm asking for my story, in which sapient rats struggle against two-legged monsters with opposable thumbs.
Had it go off on a couple sites I've been on, and it's remarkable how it reaches every corner of the mine.
And if you want to buy some: http://www.zacon.ca/stench-gas.asp
They use wasabi.
Found it... https://youtu.be/kH5JhYsfNMA?t=1m10s
Nowadays my sister suffers from claustrophobia, while I feel most comfortable shoved into small nooks. But I think I was mostly put off by anxiety related to ventilation.
Visited Timmins this May, another northern Ontario city. It was snowing.
These big companies turn regular people into corporate livestock to serve the wealthy.
If you were to analyze Facebook as if it were a country, the wealth gap among employees would be atrocious - The top 1% would own maybe 99% of the wealth of the country and everyone else would earn a minuscule fraction of the total value that they produced.
If we let monopolies take over, then the economy of the world will start to mirror the economies within these large corporations.
What's worse is that the social aspects will also be mirrored. We will gradually lose freedom of speech, in the same way that employees of large corporations don't have the freedom to say what they really think to their bosses.
Many who have worked for a big corporation will know how oppressive the environment can be. I'm really glad that I live in a time when there are still alternatives.
I'm not a fan of big government, but at some point depending solely on their goodwill seems dangerous.
Of the tech giants, Apple, Amazon, and Microsoft seem to be in the best shape at the moment. They make their money from selling real products and services to real end users.
Amazon is strong but not unbeatable. Walmart in particular, as well as a hypothetical alliance/merger of supermarket chains, are well positioned to break Amazon's dominance of e-commerce. But they will need the ambition and ruthlessness that has served Bezos so well. Few large American corporations still have the vigor and virility of Bezos's Amazon.
Microsoft (full disclosure: my former employer) too is strong but not unbeatable. All of their products are facing tough competition from Apple (in OS and hardware sales), Google (in online services), and multiple others (in business software). Microsoft's wins are hard fought and fair, and the competition never lags too much.
Re Apple: Technology comes and goes, but the iPhone was a good one. The troubling thing for me is the amount of cash they are hoarding. If this is intended to underwrite Apple Pay as a new bank, then they must first allow open access for all technology platforms to use Apple Pay. You can't have a new dollar that works better at Walmart than Tesco.
Re Google and Facebook: These companies are advertisers. I have no problem with their size or structure at present. I do have a problem with using their easy cross border presence to avoid taxation. You cannot have the biggest revenue generators for advertising paying no tax, when everyone else has to. In the UK Google is headquartered in Ireland so the UK receives no tax from the billions of revenue. This is unsustainable for the country.
My real problem with Google and Facebook is that they regard everything about us as theirs to do with as they please.
Monopolies are very easy to form on the internet, and in the interest of improving everyone's use, we need to try to avoid them. Walled gardens currently trap people into one service and limit the ability to swap between them, similar to "forcing" you to use just one company for your construction work, no matter the price.
I haven't heard a great answer to this problem yet, if one even exists. Is there a way to attempt to prevent these from forming which can be practically implemented?
I'm not sure what the workaround is, but for sure we (Americans) do not apply anti-trust / competition law as aggressively as we should. It just seems wrong to me to allow these big companies to buy up smaller companies and then obliterate them into nothing: not using their technologies, just destroying them by putting the patents into a vault and suing anyone who infringes, while not letting anyone benefit either. They all do this to varying degrees.
It's like at a certain market value as a percentage of either global or maybe national wealth, companies should be disallowed from mergers of any kind. And at another level of size, they are required to break themselves up into pieces.
Edit: Maybe disallow hoarding patents. If after X years you're not using a patent, it either auto-expires and is relegated to public domain, or it's compulsory to sell it. Use it or lose it!
On the other hand, you're locked in with your broadband ISP. If I'm going to decry some internet business, I will first point the finger at Comcast and their ilk.
Perhaps they are "too big", but only because there isn't a revolving door between tech companies and the corridors of power (yet?)
And yet it's allowed that something like 80% of smartphones sold in Europe come with Android versions that basically lock the user into the Google ecosystem, or else you can't use the default store.
Is there an obvious flaw to this approach? Why don't I see anyone ever suggesting it?
Proprietary technology has a monopolizing effect in capitalism.
Since technology by definition has an exponential growth rate of efficiency, the monopolizing effect grows with it.
There are of course real-life examples from the past...
On the other, being so reliant on one company for so much is bound to cause problems at some point.
Regulations are a double edged sword and it's often used to stifle new business and crush possible competition.
It argues that every information network in history (telegraph, telephone, radio, cable) follows a pattern of consolidation and disintegration. New inventions have always had the chance to disrupt the old industry, but our modern network, the Internet, might be an exception, because the Internet is the master switch of all things digitized.
What is the point of discussion if nothing concrete, other than rhetoric, can be done?
I don't understand why Microsoft gets fined, but Google gets a pass when they used their other products to push Chrome to a dominant position.
The asymmetry between me as a customer and a large organisation with a faceless customer service is just so big that complaints take too much effort to reach someone that could do something about it (if they were willing to own up to problems, which they usually are not).
Having the legal right to get any fee refunded and getting it are just so far removed that I would wager all money handling services make non-trivial amounts of profits from unjust fees because they can exploit this asymmetry.
Sadly, for me as an individual the right decision is almost always to let it go because my time is more expensive.
Never use debit cards when credit cards are accepted, is my general tip.
This sounds like yet another example of why SMS is not a good second factor. The Uber rep's responses seem to ignore the question of how this account was compromised (instead providing suggestions for good password hygiene), so it's not clear to me whether they even think that the SMS PIN is supposed to provide any security at all.
It sucks but it makes sense for the merchant. The bank should return the fees but the exchange rate difference is likely lost.
Also another good reason not to use a debit card for any online transaction. At least with a credit card, no one can take your money while they're settling the dispute.
In 2015 my Uber account was hacked and $1k was taken from my bank account. Uber knew/knows about their users getting hacked, and their PR line was that it's the user's fault for using a bad password. I then tried to cancel my Uber account via their site, but there is no option that lets the user do so; it can only be done by contacting and waiting for a support person. It took them a few days to cancel my account.
Needless to say I loathe them for this reason, followed by all their other horrid behavior!
If the consumer initiates the chargeback, it will be handled in the consumer's currency. Which will result in a fun reconciliation problem for the business.
If the company initiates it will be done in the currency it was originally charged in. Which will give the consumer more or less money depending on exchange rates.
And to be fair to Uber, they don't have control over foreign transaction fees or changing forex rates. This just as well might have worked out in the author's favor. Uber could curry some goodwill by covering the forex losses and transaction fees in this case, especially since it came out to about $20, but God knows that that's not their MO.
The title gives the impression that your credit card can be used for transactions outside of Uber by attackers.
I'd just like to point out that if the currency value changed the other way, he would be refunded more money.
I actually wondered where that money technically goes in this case. I rented a car abroad once; a block of 3000 euros was put on my card, and when it was released I got less money back than was originally blocked, since the exchange rate had changed. So someone made money just by blocking that money for a few days, but who? The bank?
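A rough sketch of where the money goes: besides the rate moving between block and release, the card-issuing bank typically applies a conversion markup (spread) in its favor on each conversion, so converting out and back loses money twice even if the mid-market rate were flat. All numbers below are hypothetical, just to illustrate the mechanics:

```python
# Hypothetical round trip for a 3000 EUR block on a non-EUR card.
block_eur = 3000.0
rate_at_block = 1.10    # local currency per EUR when the hold is placed (assumed)
rate_at_release = 1.08  # rate when the hold is released (assumed)
spread = 0.02           # bank's conversion markup, applied both ways (assumed)

# Debited at an unfavorable (marked-up) rate when blocked...
debited = block_eur * (rate_at_block + spread)
# ...credited back at an unfavorable (marked-down) rate when released.
credited = block_eur * (rate_at_release - spread)
loss = debited - credited

print(f"debited {debited:.2f}, credited {credited:.2f}, lost {loss:.2f}")
```

Under these assumed numbers, the spread portion goes to the issuing bank and/or card network as conversion revenue, while the rate-movement portion is a pure forex gain or loss that nobody in the transaction necessarily "pockets."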