And so here we are now.
But the best, still-mostly-hidden feature I've found recently is App Scripting and especially the ability to do a UrlFetch.
I use it as an "API Runner" to run various batch jobs against APIs.
They are solving a problem that doesn't really exist. The challenge is not the last step of a data report; it's the steps at the beginning: getting good data in, formatting, joining multiple sources, automation, dealing with junk data, procedures, etc.
I'm not convinced it's better just because it has machine learning on the back end, but if Excel would learn how I want my graphs made from how I manually adjust them (adding axis labels and a title, color preferences, never a 3D bar or pie chart), that'd be a nice enhancement. I'm sure there's a setting, but I haven't searched for it.
Has anyone worked on something like this? The big challenge is synchronization between the server and multiple clients while still being able to offload a lot of computation onto the client.
I wonder how the security is built. If I maliciously change the formulas in my browser, will the backend datastore still accept the data?
Worth noting that the region from 1.67 V to 3.33 V is undefined and systems in practice will not behave nicely for signals in this range. A CMOS logic 1 needs to be above 2/3 Vdd to be reliably recognized.
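For concreteness, here is that rule of thumb as a tiny Python sketch (the 1/3 and 2/3 Vdd thresholds, with Vdd = 5 V assumed):

    # Inputs below 1/3 Vdd read as logic 0, above 2/3 Vdd as logic 1;
    # anything in between is undefined (1.67 V to 3.33 V at Vdd = 5 V).
    def classify_cmos_input(v, vdd=5.0):
        if v <= vdd / 3:
            return "logic 0"
        if v >= 2 * vdd / 3:
            return "logic 1"
        return "undefined"  # no guarantees from real hardware here

    for v in (0.5, 2.5, 4.2):
        print(f"{v:.2f} V -> {classify_cmos_input(v)}")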
Since jumbo frames have to be fragmented back down to 1500 bytes for devices that don't support them, they're typically only used in closed internal networks, like a SAN. People typically see about a 5% to 10% bump in performance.
Why is that a better idea than just normal text?
So if you are enjoying this article consider purchasing a subscription and supporting more work like this.
 From 2014: http://www.railwaygazette.com/news/infrastructure/single-vie...
He also talks about how it's supposed to be related to Conway's game of life but is actually not (Conway's game of life is a 2D cellular automaton, while rule 30 is 1D).
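For anyone who hasn't played with it, rule 30 fits in a few lines of Python; this is a minimal sketch (wrap-around edges are my choice here, not part of the definition):

    # Rule 30: each new cell depends on the three cells above it; the number
    # 30 (binary 00011110) is the lookup table over the 8 neighborhood patterns.
    def rule30_step(cells):
        n = len(cells)
        return [
            (30 >> (cells[(i - 1) % n] << 2 | cells[i] << 1 | cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    row = [0] * 31
    row[15] = 1  # start from a single live cell
    for _ in range(16):
        print("".join("#" if c else "." for c in row))
        row = rule30_step(row)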
Here's a few layers of cellular automata (anneal, life and brian) combined with some error diffusion dithered heat flow, for your enjoyment (try clicking and dragging and spinning the mouse wheel):
Despite some controversies (are there enough ticket machines, enough toilets, enough train services, and so on...) Cambridge North is going to become a popular station.
The Sumerian columns with geometric mosaics are pretty cool too.
I did not know such patterns were also present in antique art. I guess one learns a cool thing every day!
Instead it's the same endless blabber about the damn automata we have heard for the last 20 years.
SHA-2 512/256 is much faster than SHA-3 and supported by more libraries.
The notion that by recommending SHA-2 512/256 you're setting people up to use prefix MAC SHA-2 512 or SHA-2 256 is kind of silly. You could similarly argue that by telling people to use SHA-3, you're risking that they "fall back" to SHA-2. Either way, you're calling for a specific hash.
The reality is that some people like SHA-3 because it's interesting and because its primitives can be used for a variety of non-hashing applications. That's true! Nobody is saying SHA-3 shouldn't exist. They're just saying: there's no good reason to use it in a modern application.
(If you're not clear on length-extension attacks: they're the reason we use HMAC. HMAC-SHA2 isn't vulnerable to length extension attacks; neither is HMAC-SHA1 or HMAC-MD5 --- both of which, fun fact, can't currently be attacked, despite the weakness of their underlying hash. But if you use SHA-2 512/256, you don't have to use HMAC.)
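To make that concrete, a small Python sketch of both options (note that "sha512_256" is only available in hashlib when the underlying OpenSSL provides it; check hashlib.algorithms_available):

    import hashlib, hmac

    key = b"supersecretkey"
    msg = b"amount=100&to=alice"

    # Option 1: HMAC-SHA256, the standard MAC construction; immune to
    # length extension by design.
    tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

    # Option 2: SHA-512/256 (SHA-512 truncated to 256 bits). Because the full
    # internal state is never revealed, the plain prefix construction
    # SHA-512/256(key || msg) is not length-extendable either.
    digest = hashlib.new("sha512_256", key + msg).hexdigest()

    print(tag)
    print(digest)
    # When verifying a tag, compare with hmac.compare_digest() to avoid
    # timing side channels.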
I think the point the OP makes is that a suggestion like "use SHA-3" is a simple, succinct, and not-unacceptable answer to a question like "what cryptographic hash function should I use?", giving a safe, albeit conservative, but instantly graspable answer without having to go into additional detail -- other than the obligatory mention that general-purpose hash functions aren't by themselves appropriate for key derivation ("password hashing").
The alternative view -- that SHA-512/256 (which suffers from its naming), or the longer-but-less-truncated SHA-384, is faster, more studied, and more widely supported -- is a more nuanced recommendation, but then you have to explain why you don't mean SHA-256 or SHA-512. The innovation of libraries like nacl and libsodium was to free the user of crypto from having to be a crypto expert themselves, and once you have to explain which of the SHA-2 hashes specified in FIPS PUB 180-4 you should and shouldn't use, we're not really any better off than in the footgun days.
A much more general argument is that most people should be using cryptographic frameworks (e.g., which implement TLS), and Adam Langley's thoughts about whether or not SHA-2 should be skipped should be aimed at people who are creating those cryptographic frameworks.
But if we are giving advice to random application programmers, they shouldn't be trying to pick cryptographic algorithms to begin with, and the question of whether you should be using SHA-2(key || data) is the sort of thing where the Zen master would be hitting the student with a cluestick for asking the wrong question to begin with.
I mostly use hashes as part of signing/verifying small messages, say an 80-byte JWT, a blockchain transaction, a certificate, a TLS/SSH packet, etc. Besides hashing large files (which I rarely do), I don't see where I would reach asymptotic performance, or even use tree hashing.
SHA3's block size is 200 bytes, KangarooTwelve's is, if I'm not mistaken, 8192? I'm more worried about not even filling the first block :)
That's a strawman. It's NOT useful.
Is it simply a matter of scale?
Edit: I thought it was illegal in the USA to pay for body parts, but there is an exception for plasma (and maybe the rest of the blood?). But it's already predatory http://www.nytimes.com/2009/12/06/business/06plasma.html and this new business only makes it look worse.
Sounds like the most stereotypical trope of snake oil.
Especially given the time and money involved in having to get regular transfusions, I'd think the practitioners would be better off using the time to spend an extra hour in the gym.
another recent addition: https://www.amazon.com/Mathematical-Tools-Physics-Dover-Book...
"purer" than the rest: https://www.amazon.com/Mathematical-Physics-Chicago-Lectures...
"more fun" than the rest: https://www.amazon.com/Geometry-Physics-Introduction-Theodor...
this pair is probably the most rigorous and difficult of the bunch (at least from what I remember):
and the grand-daddy of them all
And this is exactly what Michael Stone and Paul Goldbart did in their book as well, albeit their book is denser, stricter, and covers more advanced topics like differential geometry.
 - https://archive.org/details/ZeldovichMyskisElementsOfApplied...
Another similar read is Vidar Hokstad's blog series, Writing a Compiler in Ruby, which was first submitted here 9 years ago!
There are so many great books out there on how to create a lisp, or a typical mutable object-oriented language, but with one notable exception (that's unfinished), there are no approachable online tutorials/books out there that I've found on building compilers or interpreters for functional languages. Only academic papers, a textbook, and one or two books from 20-30 years ago.
Pierce's Types and Programming Languages is a great textbook that covers all of this material, but from an extremely detailed and formal academic perspective. It would be great to see more approachable tutorials or short books online to complement Pierce's text.
I've started writing my own in-depth tutorial on this subject using Scala as the implementation language, but would love to see other tutorials/books as well.
Of course in addition to executing an AST you can also generate CODE for it...
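For example (a toy sketch of my own, not anything from the book): the same little arithmetic AST, first walked directly, then turned into generated code:

    import operator

    OPS = {"+": operator.add, "*": operator.mul}

    # Nodes are ("num", value) or ("bin", op, left, right): this is 1 + 2 * 3
    tree = ("bin", "+", ("num", 1), ("bin", "*", ("num", 2), ("num", 3)))

    def interpret(node):
        """Execute the AST directly by walking the tree."""
        if node[0] == "num":
            return node[1]
        _, op, left, right = node
        return OPS[op](interpret(left), interpret(right))

    def codegen(node):
        """Emit source code for the AST instead of executing it."""
        if node[0] == "num":
            return str(node[1])
        _, op, left, right = node
        return f"({codegen(left)} {op} {codegen(right)})"

    print(interpret(tree))                      # 7, by tree-walking
    src = codegen(tree)                         # "(1 + (2 * 3))"
    print(eval(compile(src, "<gen>", "eval")))  # 7, by running generated code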
Also, to enforce an extra layer of learning, I am writing the interpreter in Go (the book uses Java and C).
I feel that dynamic variable lookup is a mistake, though -- it's just so painful to have to wait until runtime to discover you've made a typo. Is supporting mutual recursion really important enough to offset this pain?
2000 lines of Java code? I guess 400 of the lines are just getFoo(), setFoo(), getBar(), setBar(), and so on and so on...
At the beginning I thought, "Why is this tutorial written in Java? Yuck!" Then, in a flash of enlightenment, I saw this was a perfect choice: the author, bless him, is showing us how to write an interpreter in Java, so after one is finished, one can stop using Java and start using this brand new language.
Jokes aside, I am VERY MUCH looking forward to, and EXCITED about, the next chapter on implementing a bytecode virtual machine. So let's hope the author completes the book soon!!
Then I will finally create my ambitious project for an object oriented COBOL, which will be called "ADD 1 TO COBOL GIVING COBOL" (drum fill please...)
Companies use OneLogin so employees have 1 service to enter their credentials and can then use federated access to apps like Google, Office 365, Salesforce, etc without signing in again, most often connected via SAML which uses public/private keys. The identity provider can also be external, so for example users can sign-in via the OneLogin UI but the username/password are actually authenticated against Office 365 Active Directory instead.
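The trust core of that SAML-style federation can be sketched in a few lines (assuming the third-party Python cryptography package; the assertion format here is made up for illustration): the IdP signs, the SP verifies, and the password never leaves the IdP:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # The identity provider holds the private key...
    idp_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    assertion = b"subject=alice@example.com;audience=sp.example.com"

    signature = idp_key.sign(assertion, padding.PKCS1v15(), hashes.SHA256())

    # ...and the service provider, which only knows the public key (from the
    # IdP's metadata), verifies the assertion without ever seeing a password.
    idp_key.public_key().verify(signature, assertion, padding.PKCS1v15(), hashes.SHA256())
    print("assertion accepted")  # verify() raises InvalidSignature on tampering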
In addition, customers are unable to do any forensic analysis to determine how their data was affected.
> OneLogin's blog post includes no other details, aside from a reference to the company's compliance page.
The only option is to hope they provide customers with relevant information in a "timely manner", but that could be months for an organization with thousands of customers.
Our review has shown that a threat actor obtained access to a set of AWS keys and used them to access the AWS API from an intermediate host with another, smaller service provider in the US. Evidence shows the attack started on May 31, 2017 around 2 am PST. Through the AWS API, the actor created several instances in our infrastructure to do reconnaissance. OneLogin staff was alerted of unusual database activity around 9 am PST and within minutes shut down the affected instance as well as the AWS keys that were used to create it.
Credit where credit is due, that's a pretty quick response time for data breaches, which are normally quoted as being discovered in an average of 30 or so days.
However, the fact that people's information can be decrypted from this breach is awful. It sounds a lot like the private key to decrypt this information was stored alongside the data in the database... whoops! That's like storing the cleartext password. Let's hope the decrypted information contains strongly hashed passwords, but I'm not holding my breath.
Isn't that at least somewhat analogous to using the same username and password on every site?
Better services (1Password, for example) are specifically designed to never know your master password/key, to avoid this very situation.
As an aside, this is why I think that any effort of a prospective employee to divine the value of a stock option package is likely in vain. Without a detailed accounting of the ins and outs of the preferred stock that is senior to your common shares, it is nigh impossible to tell how much the common shares (and options thereon) are worth.
These authors have developed a system to tease out the optionality using standard financial methods (Black-Scholes, for example), which can give us all a better understanding of the true worth of these companies. Far overdue, in my opinion.
However, let's look at another example. Take Nutanix (Series E valuation at $2B, pre-IPO at $2.1B). This model values it at $0.8B in their table, roughly a third of the IPO price.
There is no explanation forthcoming in this article as to why that's the case. This makes it seem like the Square example was cherry-picked.
I picked NTNX at random, so I don't know if it's the one exception. I'm not going to exhaustively check every result, however. I expect them to do that for me and not sell me a story without pointing out the terrible exceptions.
There's more money than there are good deals in Silicon Valley, so later stage investors are forced to offer more money for less equity in order to beat other term sheets. This ends up looking like sky-high valuations, since investors that offer fair-market-valuations are unlikely to get picked. Founders naturally gravitate towards minimizing dilution.
How much is something really worth? However much the next person is willing to pay; that's really what this is all about. I've been down the road of VCs, exits, etc. before, and to be honest most of it is just fluff people make up, loopholes in the way things are valued. Forget basic business and accounting; they are literally making this up as they go.
Most VCs I feel have a detrimental effect on startups, the only thing a lot of them provide is money, which isn't always what a startup needs. It doesn't matter to the VCs that they are mostly wrong, they just have to be right once.
The question we need to ask here is what happens when it all crumbles down. How valuable something is ultimately depends on how many lives it improves; whether something is valuable is measured by the pain inflicted on society if the startup didn't exist, and ultimately, if something is not needed, it won't survive anyway. The market is cruel like that, and VC money shields entrepreneurs from that crucial factor. All this fluffed-up valuation has nothing to do with the survival of a business anyway.
So if this article is to be trusted, it is overvalued by $4.4B. That is another Square.
AIUI, the common share FMV they list should be comparable to the 409A common share valuations you may have gotten. Roughly speaking, assuming the funding round in the table is close in time to the 409A valuation.
10 apples are worth $100. 90 oranges are not therefore worth $900.
We have drifted far from the vision of people building up computing.
There aren't just a handful of ad networks; there are thousands, if not millions, out there. On top of that, they utilize each other to push out ads in a horrid, rat-king-like incestuous jumble. Any payments to avoid ads served by these companies would require compensating all of them; the end result, predictably, would be movie-studio accounting that leaves the content provider with nothing in the end.
This setup does nothing to address the privacy issues people have with companies like Google tracking their comings and goings. Google is still at the heart of this system and still knows everything about you. To get any benefit from this system actually requires you to embrace Google. People want to maintain their privacy, they don't want to login to Google to get rid of ads.
It's easy to envision a system using a cryptocurrency and a digital wallet held by your browser that you fill occasionally and that prompts you to pay a site, much the way Location Services works, based simply on a meta tag the site provider puts in their page head containing their wallet address, requested amount, and schedule. It's impossible to imagine Google, Apple, Facebook, or anyone else who wants in your pants allowing themselves to be cut out of a revenue stream by such a system. Companies like this are double dipping: charging everyone else to be the broker while also being the service provider getting paid.
I honestly don't know if an ad-free web is allowable. It's technically possible, but everyone who isn't the content creator is going to do everything they can to stop it from happening.
That's okay; uBlock Origin, with the whitelist I compiled of sites I know won't violate my browser with ads, is still available in my country.
Content creators need to be able to charge different amounts for different quality content.
In depth, well researched reporting needs to be able to earn more than a buzzfeed article. That's not possible with a flat "per-eyeball" cost, where the revenue to the content creator is uncorrelated with the cost to create or the value/quality of the content.
I wish it weren't google (who also already owns advertising), but someone large is the only one who can make it happen.
A model like this is necessary to support quality content online.
If this was more like Youtube Red, I'd be all over this. I would love to pay to remove all google ads. I get that uBlock exists, but I want to support the sites I use.
Unrelated: the sidenav thing is empty? WTF?
Google Contributor was a program run by Google that allowed users to view the websites in the Google Network of content sites without any of the advertisements that are administered, sorted, and maintained by Google.
The program started with prominent websites, like The Onion and Mashable among others, to test this service. After November 2015, the program opened up to any publisher who displayed ads on their websites through Google AdSense without requiring any sign-on from publishers.
Since November 2015, the program was available to everyone in the United States. Google Contributor stopped accepting new registrations after December 2016, in preparation for a new version launching in early 2017. On January 17, 2017, Google Contributor was shut down; as of 8:40 AM that day, no replacement had been announced.
Hypothetical question: If I were allowed to bid on my own ad impressions - and if I won an auction, no ad would be shown - how much would it cost a month for me to see no adverts? (I realize this is heavily dependent upon the type of sites that are involved, so I guess take the average HN user as an example).
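A back-of-envelope sketch of what the math would look like, with loudly made-up numbers (pages per day, ad slots per page, and the CPM to outbid are all guesses, not data):

    pages_per_day = 200    # assumption: heavy HN-style browsing
    slots_per_page = 3     # assumption: display ad slots per page
    cpm_to_beat = 2.00     # assumption: $ per 1000 impressions you must outbid

    impressions = pages_per_day * slots_per_page * 30
    monthly_cost = impressions / 1000.0 * cpm_to_beat
    print(f"{impressions} impressions/month -> ${monthly_cost:.2f}/month")
    # 18000 impressions/month -> $36.00/month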
Then buy subscriptions to sites you like.
Don't give Google a percentage of everything.
It'll be launching in literally a week or two... it's very simple to integrate and comes with its own WordPress plugin (and instructions to integrate your own CMS).
What advice do people have about this per-article payment space? I have a load of ideas I want to try, so maybe while Google concentrates on ads I'll be able to look at various optional payment models.
Initially I want to just charge a flat 5% plus whatever Stripe fees you incur to top up your wallet, but I'm concerned I'll get a lot of noise/scaling issues if I don't charge a monthly fee. Thoughts?
I've always wanted a way to simply specify the minimum bid (by bidding myself) for my attention into the ad exchange. This peer-reviewed pricing seems like it adds lots of cognitive overhead for me?
Thanks, as always, for understanding privacy big G.
I pay publishers I like for their content (buying newspapers, subscriptions, etc) and I don't get why Google should be the middleman in this.
This is incredibly cheeky.
You WON'T BELIEVE YOUR EYE what GOOGLE will do NEXT! - $0.5
But, most importantly, Flattr guarantees the anonymity of consumers' transactions. So the big G won't have a log of what websites you paid to access.
Let's actually sit down and think about this. If this happened, there would be some big changes to the web. (Note: this is a quick response; I should write a real paper about this.)
First of all, once people are paying for no ads, websites will overload their pages with ads, because Google will have the solution that "everyone is choosing anyway". It would make it "OK" to have tons of ads on your site, because there's a solution.
Then your web experience becomes terrible. For a "small fee", you can keep a "nice" experience, one that used to be, and always should be, free. However, if you don't give Google your money, your web experience will be so filled with ads that content takes forever to load. And even when it does load, it'll take 30 minutes to read an article, because every 30 seconds you'll get the usual ad popups. Then you'll have the sidebar ads that follow you, or the mobile ones that get in your way as you scroll. You won't be able to view content, because ads will have taken over even more than they already have.
Now, in this terrible future, what about those who can't afford Google's "small fee"? They'll be condemned to the ad version of the entire web, one that doesn't load properly and that people have started to discard. The true "web" will be the one where you pay to view. These people won't have access. And if they aren't able to pay the small fee, then most likely they're accessing the internet over a slow connection. Maybe it's a library that can't afford the fee either. Or maybe it's at home, and they can only afford slow internet and second-hand computers. Not everyone has the money to buy a brand spanking new MacBook Pro/Air from Apple.
So now, 5 years down the road, there are two versions of the web: the one that Google controls and tracks 100% (oh yeah, we didn't even get to that yet), and the one so ruined with ads that the people make a decision. A big one: let's just get rid of the ad version of the web. You can't use it anyway, so there's no use. The only way to go is to give your monthly payment to Google so that you can access the web. Now you have to pay to view the web at all. The free web is gone. Google took it away.
There's also Google, sitting on their exponentially growing pile of money, tracking every web user. Sure, there may be other competing services that let you "into" the web, but they're also gonna track you. No doubt about that.
There's so much more that I haven't even said. How will websites determine how much a "view" is worth? What about requests that are half loaded? How will you know how much it costs to view a webpage? Not all web content is created equal. Definitely not.
There's so many more things. So many more.
We can't let this happen.
Of course, if I click on the links to the actual enrolled sites I get "this service is not available in your country", so it may be there.
> How it works: You load your pass with $5. Each time you visit a page without ads, a per-page fee is deducted from your pass to pay the creators of the website, after a small portion is kept by Google to cover the cost of running the service. The price per page is set by the creator of the site. You will be informed in advance if a site creator changes their price per page. Contributor is easy to update: change settings and add sites or remove them from your pass at any time.
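In code, the quoted mechanics come down to something like this sketch (the 5% service cut is my assumption; the quote only says "a small portion"):

    SERVICE_CUT = 0.05  # assumption: Google's actual cut isn't stated

    class ContributorPass:
        def __init__(self, balance=5.00):       # "You load your pass with $5"
            self.balance = balance

        def visit(self, price_per_page):
            """Deduct the creator-set per-page fee; return the creator's share."""
            if self.balance < price_per_page:
                raise ValueError("pass is empty; ads would be shown instead")
            self.balance -= price_per_page
            return price_per_page * (1 - SERVICE_CUT)

    p = ContributorPass()
    print(f"creator gets ${p.visit(0.02):.4f}, balance ${p.balance:.2f}")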
I love the sound of this, I just do not like the idea that it is Google doing it. It feels ... dirty somehow. A third party doing this I have no problem with.
Whoever came up with that bright idea?
For me, this is what an ideal web would look like. My ad-blocker would barely get a workout, and I'd happily pay for bundled (not pay-walled, bundled, downloadable) content as I did for many years with magazines.
No-one wants high quality content to disappear, but advertising and web paywalls are not the only options.
I really hope this works out. As much as I dislike advertising...and Google...something has to happen. The "ad blocker" - "ad blocker blocker" arms race is patently stupid. There has to be a way to get money to content providers so they can opt out of the madness. Google will still be able to provide them with all the sweet sweet surveillance data that they thrive on.
One problem at a time, I guess.
Also, shutting down one miner just makes the remaining miners more profitable in the long term. In the short term (~1 week), it reduces the rate at which bitcoins are generated until the difficulty retarget is reached.
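For reference, a sketch of the retarget rule: every 2016 blocks, difficulty is rescaled so blocks again take ~10 minutes, with the adjustment clamped to a factor of 4 in either direction:

    TARGET_SPAN = 2016 * 600  # seconds expected for 2016 blocks at 10 min each

    def retarget(difficulty, actual_span):
        ratio = TARGET_SPAN / float(actual_span)
        ratio = max(0.25, min(4.0, ratio))  # consensus clamps the adjustment
        return difficulty * ratio

    # If a big miner leaves and the 2016 blocks take 25% longer than intended,
    # difficulty drops by 20% at the next boundary:
    print(retarget(1.0, TARGET_SPAN * 1.25))  # 0.8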
The community is pushing to enable larger blocks, among other changes that would cut down on the fees that miners (read: Chinese miners) collect.
Obviously the miners won't agree on that, but the network agreement depends on what the Bitcoin processing power decides, not on what the majority of the users do. At some point either the miners join, the users give up, or the chain forks in two (obviously with wildly different valuations).
I'm not familiar with People.cn, but my first reaction to the headline is that the piece has ulterior motives.
This is not true. Bitcoin price swings like crazy.
OR do a 3 fund portfolio like: https://www.bogleheads.org/wiki/Three-fund_portfolio
OR buy a target retirement fund from Vanguard: https://investor.vanguard.com/search/?query=Vanguard%20targe...
OR fill out a risk profile on Wealthfront/Betterment and invest there.
Bottom line is pick an approach and don't ever touch a damn thing.
I believe this advice can fit on a single web page and doesn't need an app, a community or otherwise.
Unless you're extremely wealthy, any money you are able to save (outside of retirement money) is probably money you are going to want to use for something to improve your life in the semi-near future. Buying a house or car (or just a better one) for example.
With that assumption in place, under what circumstances does investing in index funds make any sense whatsoever? The entire market crashes on occasion due to herd mentality, and yet even given that level of risk index funds still take many years to appreciate in value significantly. It seems like an absolutely terrible place to put money that isn't specifically intended for retirement or something like a 529 plan.
Also, it is good to point out that the leaderboard means nothing; it is totally pointless.
This is like playing poker with an infinite fantasy stack.
If you are a good investor you will never be on this leaderboard :)
EDIT: I've edited the code a bit so that it doesn't look like PERL or J itself https://gist.github.com/piotrklibert/4d32c8cc6fcf20643a257a2...
>"Indeed, over the past year, Mr. Sietsema, the senior critic at Eater NY, has watched with mild schadenfreude but greater alarm as his neighborhood has undergone yet another transformation from a famed retail corridor whose commercial rents and exclusivity rivaled Rodeo Drive in Beverly Hills, Calif"
I'm no fan of what happened to Bleecker Street, but I find it amusing that an employee of Eater, which promotes conspicuous consumption in the food arena (celebrity chefs, "where to eat now", etc.), has "schadenfreude" for conspicuous consumption in the high-end retail fashion world. There's a slight bit of hypocrisy in that. Trendy restaurants and trendy fashion boutiques seem to come to neighborhoods in lockstep; bridge-and-tunnel crowds usually come into the neighborhood to both shop and eat. The metamorphosis of Bleecker Street seemed to begin with Magnolia Bakery and cupcakes in the early 2000s, as mentioned in the article. And Eater has certainly done its part over the years in promoting the cult of Magnolia Bakery, including pieces by Mr. Sietsema himself.
For some reason foodie culture seems to view its version of rampant consumerism as being a more noble pursuit.
Also see the following from Eater:
Living so close to NYC all my life, growing up I'd often visit the Village, and Bleecker Street was always on the list of destinations without hesitation. Admittedly, this was decades ago. But when I visited the Village earlier this year, it was quite sad. Bleecker and other (normally more active) streets were so desolate, and their buzzy mixture of bohemia and eccentric stylishness was lacking. I felt like I was visiting a Disney-ified version of some way-past-its-prime neighborhood. Sometimes it feels like things have too much shine and not enough real, earthy truth.
I'm looking forward to a cool off in the CRE space and a slightly smaller one in the residential space.
When the rent went up 6-10 fold, they could refinance and borrow against the new inflated equity to secure more investments. Renting out at a substantially lower value would negatively impact this financing.
It's a better deal for them to sit empty at a potential rent of 35K/mo than it is to lower the rent and have to repay/refinance loans.
Landlords are happy to let leases expire, jack up the rent, and look for the next 10-year tenant. Usually a 2-3% annual escalation is added to the lease.
Edit: adding quote from the original article -
> At a time when shoppers are buying online and fashion brands across the industry are hurting, the challenging business environment makes it less interesting to do vanity locations,
The same phenomenon seems to be happening to prime retail locations globally; take a drive down Oxford St in Sydney, for example. I'm interested in what the solution is to clear the market on all this CRE; you can see a lot of startup/coworking spaces taking advantage of the vacancies in these initial stages.
Near the outlet the icebergs catch on the bed of the lagoon and tumble as they melt. There's a glorious variety of colours as the freshly exposed dark glacier ice emerges (like in the attached article) before the surface melts and it returns to white.
Mind boggling. A chunk of ice the size of Manhattan rolling over.
Also, there is no reference for scale, so you can't tell how big the structure really is.
Why are we allowing pics now?
We hope this will take a small step towards correcting this problem.
This is a fantastic start. Yes, angel.co/clear was complicated. I hope you have better luck driving adoption.
What you are charting at the top is the company's exit value, but the UI controls take inputs tied to the employee. You should chart the employee's exit value.
You should also make the "or not*" controls totally optional toggles, as an exploratory UI; i.e., the user can toggle "preference" and see what it does to their value.
I joined Zenefits when everyone had dollar signs in their eyes. It felt like the roaring 20s (or at least the accounts I've heard of them).
How are they doing nowadays?
I always say don't compromise on salary for equity. Compromise for the experience, for an entrance into the field, because the chick at the counter was digging you, but not because of some payout you think you'll get in the future.
Suppose you regard tldroptions.io's probability distributions and outcomes as correct, and the only thing you care about is maximizing the long-term growth rate of your capital. Then the Kelly criterion (https://en.wikipedia.org/wiki/Kelly_criterion, a.k.a. Fortune's Formula) says that you should try to maximize the geometric mean of your capital, which amounts to maximizing the expected logarithm of your capital.
To make this concrete, suppose you are choosing between two options:
- (Startup): Working at a series C+ start-up for three years, where you receive 1% equity and $100k/yr salary, and have a 20% chance of getting ~$60 million in 3 years (according to tldroptions.io)
- (AmaGooFaceSoft): Working at AmaGooFaceSoft for three years, where you receive $300k/yr total comp (according to patio11)
For simplicity, I will ignore taxes and the time value of money. All monetary amounts below are in millions of dollars. If you have no money in the bank to start with, the geometric means of your alternatives after 3 years are:
- (Startup): exp(.2 log[60+0.3] + .8 log[0.3]) = $0.86 million
- (AmaGooFaceSoft) = $0.9 million
In this case, AmaGooFaceSoft is slightly better.
On the other hand, suppose you already have $1 million. After 3 years you will still have the $1 million, plus your salary and whatever money you get from your equity. Here the geometric means are:
- (Startup): exp(.2 log[60+1+0.3] + .8 log[1+0.3]) = $2.8 million
- (AmaGooFaceSoft): 1+0.9 = $1.9 million
In this case, it's better to join the start-up.
The base salary matters a lot. If you have no money in the bank, but you get $150k year at the startup instead of $100k, then the geometric mean of the Startup option after 3 years is better than that of AmaGooFaceSoft:
- (Startup): exp(.2 log[60+0.45] + .8 log[0.45]) = $1.2 million
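For anyone who wants to play with the numbers, here are the geometric means above computed directly (amounts in millions of dollars, matching the scenarios in this comment):

    from math import exp, log

    def geometric_mean(outcomes):
        """outcomes: (probability, final capital) pairs; capital in $M."""
        return exp(sum(p * log(c) for p, c in outcomes))

    salary3 = 0.3  # $100k/yr * 3 years at the startup
    print(geometric_mean([(0.2, 60 + salary3), (0.8, salary3)]))          # ~0.86
    print(geometric_mean([(0.2, 60 + 1 + salary3), (0.8, 1 + salary3)]))  # ~2.81
    print(geometric_mean([(0.2, 60 + 0.45), (0.8, 0.45)]))                # ~1.20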
Two things a lot of startup employees are unaware of that are worth highlighting: they actually have to buy their options, which eats into returns, and if they leave the company they have a limited window (30 days, typically) in which to do so. It would behoove them to save/plan for this fact.
You could owe taxes on money you never saw. Under the AMT rules, if you exercise options at a discounted price, you have to count the discount as "income". If, several years later, when you're ready to sell, the stock is below what you paid for it, you'll still owe the taxes on that discount.
Ask your tax person for the details before engaging in any stock option purchase.
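A sketch of how that trap plays out, with entirely hypothetical numbers (the share count, strike, FMV, and flat 28% rate are all illustrative, not tax advice):

    shares = 10000
    strike = 1.00            # hypothetical exercise price per share
    fmv_at_exercise = 10.00  # hypothetical fair market value at exercise

    # Under AMT the "spread" counts as income even though no cash was received:
    phantom_income = shares * (fmv_at_exercise - strike)  # $90,000
    amt_rate = 0.28  # illustrative flat rate; real AMT math is messier
    print(f"phantom income: ${phantom_income:,.0f}")
    print(f"potential AMT bill: ~${phantom_income * amt_rate:,.0f}")
    # If the stock later trades below your exercise price, that bill was still due.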
Caveat: this doesn't work in a "down exit" less than the previous round's value.
As @bethcodes says, this is not a calculator, because while these two numbers will get you within a factor of 2, knowing the details will help you get even closer.
(Disclosure: I built a calculator to help you do that: http://www.optionvalue.io/calculator/)
tl;dr in one word: "Beware"
This could keep the slider at $0.00 for longer (further toward the right side).
The company I work for is about to get major investment and is transitioning from an LLC. Only the 3 founders have equity currently. As engineers we have no idea how much to ask for, because we have already been working without equity. How do we account for the years we've worked? Or for the fact that our salaries aren't that great right now?
Then clicks the down arrow again. Negative numbers ensue.
True story, except I'm not usually a QA engineer.
> Instead of trying to get the right answer, we set out to build a tool that could get an answer.
I'm curious, though: where did you get the numbers about the likelihood of an exit? I thought it was pretty interesting that a Series C is statistically less likely to make you money than a Series A, according to this.
Series A 65%
Series B 68%
Series C+ 71%
If so, I'm a little surprised that seed and series C+ companies have approximately the same chance of never exiting.
The expected value of working at an early startup gets overestimated, by a lot. If you're optimizing your career, either make the most you can at an established company, or start a startup.
Or... work at an enlightened startup that understands the state of affairs and offers really generous lifestyle advantages (i.e., go work remotely for a couple of months if you want). Otherwise they are just exploiting misinformed young people, and their founders likely have some ego issues.
Well, then it's misleading.
Between AMT and capital gains / income tax, there's potential for a huge chunk of what you might earn to be taken away.
Edit: the assumption of 0.01% is also absurd.
A simplified calculator should include reasonable defaults. This is like a mortgage calculator called tldrCanYouAffordAHouse.com that assumes 25% interest rates and doesn't disclose that.
Reality is probably a little fuzzier than that, though.
As an example: say you traded around $60k a year in cash in return for higher equity. In six years you're looking at around half a mil, which means that with, say, 0.2% in a Series A, you're looking at a pretty big exit (i.e., $700M) before you even have a chance of breaking even. This is obviously a terrible trade (i.e., a sure $20 or a mere chance at $20).
To make it 'worth it' from an 'expected value' POV, you'd need to make $2-3 million on around a $3.5B exit, which is, of course, exceedingly rare.
You also have to be careful about things like options windows on exit, tricky term sheets with liquidation preferences for preferred shares, etc.
I'm going through this process now and it's shocking how most people really have no idea how this stuff works. Even with recruiters/HR/CTOs/etc who deal with this stuff day to day.
Edit: Thought about this a little deeper, and it is possible, with a lot of companies exiting prior to the Series C; but I suspect the data set of Series C+ companies may just be too small?
Answer is $0
Although I know the E = mc² formulation makes more sense from an "E² = (pc)² + (mc²)²" point of view, the author had a kind of striking way to put their point. It's as if the war managed to intertwine itself in culture so strongly that it twisted the entire general public's perception of issues of fundamental physics!
The author may wish to check  for historical reference.
And 40% for Bitcoin Unlimited (Emergent Consensus)
I recall they also spoke on some security aspects of the system's design, like how the cracked passwords never touched disk and had to be destroyed as soon as possible, etc.
I wish I could find a recording or a writeup on this somewhere, as I thought it was a pretty cool (and effective) approach.
The FCC, or the corresponding body elsewhere, should mandate that phone networks and phones support a secure messaging protocol which could guarantee that a message sent to a phone number is received only by that device.
Password-only authentication is like locks on luggage, even with best practices.
But those are bad comparisons. A key and lock is an asynchronous single use authentication+authorization mechanism. Passwords are just the authentication part, so trying to replace these just requires we have a secure way to authenticate ourselves.
We have the benefit that we are using digital systems, so our authentication can be digital, too. We can also rely on multiple factors to improve how authentic this process is. Biometrics, digital files, access to other accounts and networks, offline code generators, and personal information all provide lots of authentication data and multiply the effort needed to defeat the system. By combining all these factors, we can create a new digital key that is far more difficult to defeat than old methods by themselves, and ultimately is more flexible because it can be made up of any of these things.
The problem mainly seems to be that we live in a world of different locks, and most locks don't accept this particular kind of digital key. We've hacked around this problem and made some attempts at more compatible solutions, but they really fall short of their true potential.
In the future, you should simply be able to use any system and know that it will authenticate you in a way that can't be copied or cracked. Today that just isn't the case. So for now, maybe we should move the goal posts. We can keep making our keys more unwieldy, but we can also get more guard dogs.
The guard dogs need to exist not only to protect the locks, but the keys, too. If you go to unlock a door, a thief can knock you out and steal your key. Each aspect of our digital access needs guard dogs. We can no longer accept insecure communication methods, nor insecure computing platforms, to exchange our authentication. I think the real challenge going forward is rethinking how we process data altogether.
When things go wrong, you either move on or start fixing things, and that perpetuates your job (hopefully). If things take more time, it is their time. I come in at 8am, leave at 4pm. I don't take my laptop home. I don't work from home. I see their inefficiencies as opportunities for me to spend time on things like learning and experimenting.
But that's me. I get paid enough, I don't need or want to go up the "career ladder". Others may have loftier goals.
1. Being stuck in problem solving
2. Time pressure
3. Bad code quality and coding practice
4. Under-performing colleague
5. Feel inadequate with work
6. Mundane or repetitive task
7. Unexplained broken code
8. Bad decision making
9. Imposed limitation on development
10. Personal issues not work related
- When interviewing for a new role, interview the people you will work with and ask questions about the company. Interview them. The interview is the best place to discover that an employer or co-workers aren't a good match for you.
- Realize and accept that most software doesn't need to be perfect. There is an acceptable level of quality, and past that it doesn't matter to the business. Sure, it's likely someone else's money, but that attitude contributes to unhappiness. When production bugs happen, tackle them like a professional and save the "I told you so's".
- The same can be said for large waterfall-driven software processes. They tend (not all, but many in my experience) to have a lot of feature bloat: things people want but never actually use. This could be born of politics or appeasing people, misrepresented requirements, or the business changing faster than software delivery. Recognize if you work in a shop that does this and come to terms with it, or suggest gathering metrics on usage of your system as part of your requirements process.
- A lot of the reasons stem from existing software and its issues. You might think to steer clear of old code and work at a startup or greenfield project where everything is new. But there is a certain satisfaction (maybe enlightenment is a closer word) when you figure out unexplained/broken software and fix it. Have you felt that? You'd be amazed what little fixes do for the quality of life of the people using the software. "I am one with the code and the code is one with me."
> software developers are a slightly happy population
> the vast majority of the causes of unhappiness are of external type. Since external causes may be easier to influence than internal causes, and since influencing them will impact several developers rather than only one at a time, this suggests that there are plenty of opportunities to improve conditions for developers in practice.
Also noteworthy: this study skewed highly male (94% v 5%). This may be a source of uncertainty.
I guess this puts the emphasis on talking about problems, rather than hiding in a corner and working through them, into some perspective. But personally, I find programming under circumstances where I don't get to "spin my wheels" from time to time pretty frustrating.
1. Being stuck in problem solving, 2. Time pressure, 3. Bad code quality and coding practice
You won't be stuck (often) in problem solving if you have good management and a "no question is dumb" atmosphere in the team. You won't have time pressure (often) if the project is managed correctly. You won't have bad code quality if the management chooses to pay for several very good devs/architects and does not impose constant time pressure. Etc, etc.
My number one reason for losing motivation at work (and subsequently quitting if that does not improve) is a lack of good leadership. I can sustain anything else (if it is not constant) if managers are true leaders. Sadly, they are a very small minority in my experience so far :(
- You would typically store the private key on a disk-encrypted, app-whitelisted iPhone, so that the computer you are browsing with, whether yours or a public machine, is never involved in the authentication. Effectively this achieves 2FA, and you don't care if the machine you browse with is compromised.
- This does not rely on a third party; it is purely an authentication mechanism. So it removes the risk of that third party tracking you, or selling or leaking your data.
- It should be fairly practical and easy to use, and does not rely on installing anything on the machine you browse with.
- Even if the website you authenticate to is hacked, it stores no useful information that can be used on another domain.
I am not sure Gibson has the audience in Silicon Valley required for this to become mainstream, but the principle makes a lot of sense to me. Of course you are still exposed to the password protecting your private key being stolen, which gives the attacker access to everything, but this is no different from a password manager. Except that, unlike a password manager, you do not need to enter that master password on the machine you are browsing with, which considerably reduces the risk.
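A sketch of the per-site key idea (assuming the third-party Python cryptography package; this is a simplification of what SQRL actually specifies, not the real protocol):

    import hashlib, hmac, os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    master_key = os.urandom(32)  # lives only on the phone

    def site_key(domain):
        # Same master key + same domain -> same keypair, deterministically,
        # and no site learns anything usable on another domain.
        seed = hmac.new(master_key, domain, hashlib.sha256).digest()
        return Ed25519PrivateKey.from_private_bytes(seed)

    challenge = os.urandom(32)       # sent by the site at login time
    key = site_key(b"example.com")
    signature = key.sign(challenge)  # signed on the phone, not the browsing PC

    key.public_key().verify(signature, challenge)  # the site stores only this key
    print("login accepted")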
They are building a whole ecosystem with all kinds of capabilities and additional security that SQRL simply cannot provide, the most important being anti-phishing protection. They are working on a mechanism that would allow you to use the phone as an authenticator even when working on your desktop; this is part of the upcoming version of the standard.
They are already very popular and in a lot of hardware, and they are working with the W3C to standardize, as part of the Web Authentication group.
Some people wrongly assume that UAF is only about biometrics, but it could also be somebody entering a password or PIN. The main attraction is that it allows for independent evolution of authenticators without the server having to know or care (it can care if it likes). This will be a game changer.
1. Open login page https://www.privat24.ua on the computer, you'll see a QR code,
2. Take your phone and open the bank's official Privat24 app,
3. Within the app select "Scan QR code",
4. Upon scanning, the page on the computer is reloaded and you are presented with the dashboard.
Very convenient. I wish more services across the internet provided the same means to log in (although, of course, not every service can afford a dedicated mobile app).
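The flow above boils down to something like this in-memory sketch (a real implementation would use HTTPS endpoints, expiring tokens, and a persistent store):

    import secrets

    pending = {}  # token -> user, or None until the phone confirms

    def new_qr_token():
        """Step 1: the login page mints a token and shows it as a QR code."""
        token = secrets.token_urlsafe(32)
        pending[token] = None
        return token

    def approve_from_phone(token, user):
        """Steps 2-3: the already-authenticated banking app scans and confirms."""
        if token in pending:
            pending[token] = user

    def poll_from_browser(token):
        """Step 4: the login page polls until the token is bound to a user."""
        return pending.get(token)

    t = new_qr_token()
    approve_from_phone(t, "alice")
    print(poll_from_browser(t))  # "alice" -> the browser session is established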
I just hope that people secure their phones. I recently got a new Android phone and it has no password and no encryption by default, so I assume most people leave it like that.
If you get access to my phone you can access 10+ years of pictures, email, bank account, and all the services I use.
Besides this, I love it. Can it be implemented in a website, already, or is it just an idea..?
I was quite annoyed by it, because I had a similar idea that addressed some of this scheme's weaknesses and was developing it (half baked!) before this was released, and the negative attention this brought wasn't going to create a warm welcome for my concept, so I dropped it.
How will it work in a mobile-only world? Can this also work on iOS and Chrome OS?
It hasn't gained traction since, so it seems unlikely it ever will.
Do have a look at https://firstname.lastname@example.org/passwords-bad-ux-security...
P.s. I work for AuthMe.
They are worried that they won't be able to move in time if the market drops because they are sitting on such huge sums. Any market panic is only going to compound the transaction volume problem, leading to greater panic like people experience when there's a bank run.
It's kind of a self-fulfilling prophecy until the block size / transaction volume issue is resolved, but the community's inability to address this problem could be the death of it.
They are your competitor, and who will prevent a disgruntled employee or a hacker from stealing your successful trading strategy?
Just buy some data from eBay; you can get 20 years of historical stock market data for less than $100, and you can test any trading strategy or idea imaginable, including trend following, buy-and-hold ETFs, etc...
The barrier to entry is pretty low, and you can develop a great lifestyle business around that, with no customers, employees, or investors...
But if so much wealth is hidden and not tied to each nation's fate, those with the most power and influence lose the connection to their compatriots.
This has led to the brittleness of the current international order. Since the fall of the USSR, national treaties no longer bottom out at the self-interest of each country's residents. We are trusting a legal framework whose foundation has been severely degraded over the past 30 years.
If an accountant helps a billionaire dodge taxes he can earn millions of dollars in fees. If a tax inspector exposes the scam he might get a salary bonus worth a few thousand dollars.
I would like to see tax inspectors get paid by commission. If the tax inspector exposes a scam and the government collects $50 million, the inspector gets $2 million. (And to avoid overzealous inspectors, if a tax audit finds no impropriety the inspector forfeits $20,000)
Sure, until recently you could hide your income in Switzerland, but it was a clear crime -- there wasn't legal grey area.
And yes yes, there are other ways to try to avoid taxation (deductions etc) but this is a really big one that the US doesn't have to worry about.
Astrid Lindgren, whom you might know as the author of the Karlsson-on-the-Roof and Pippi Longstocking children's books, was once unlucky enough to have to pay 102% of her income in taxes (yes, paying more than she had earned that year). Things have gotten better since then, but I would totally understand the desire to evade arbitrarily imposed taxes.
Why do I get paid and automatically get taxed during this process, but a footballer doesn't?
It is bullshit
How would one define "Capital" in this case? What does this mean?
The first was a universal transaction tax. Every transaction that passes through the banking system has an associated transaction tax of 1 cent per 100 dollars, collected by the banks, remitted directly to the government, and taken out of the transaction. For all monies transferred out of a country, the originating country would collect and retain the tax.
The interesting aspect of this is that all funds that end up in the banking system, irrespective of the legality of their source, would participate in the transaction tax. And based on figures from over 20 years ago, by which only 1 in every 1000 dollars passing through the banking system came from legal enterprises, that's a lot of additional taxed wealth.
The other aspect of that discussion was that all other taxes and government charges (duties, etc.) would be dispensed with. It was unpalatable because it treated all as equal, and various interest groups don't like that.
The second was a flat tax (income) charged on every entity, with the only allowable deduction being salaries and wages; it would be based on gross income, not net income. One effect would be to force companies to run much more efficiently than they otherwise would. Again, treating every entity equally would not be politically palatable.
Of course, there will be those who still try to game the system; them you cannot get rid of.
And I'm not here to argue how terrible C++ is (I've done enough of that elsewhere), but only that "behavioral" arguments cut both ways, and are usually little more than an ad hominem attack and/or some good old marketing tricks rebranded as "behavioral science."
I mean, maybe the actual details of the case for switching are slightly beyond the scope the author intended to write about, but simply starting with an assertion that "modern = better, therefore people should have already switched if they were rational" and treating that as self-evident strikes me as highly arrogant and logically fallacious.
The first statement is arguable, and the parenthetical statement is false. C99/C11 have language features that C++ hasn't adopted, which makes it somewhat obnoxious to support C++ from C codebases that use them. One example is the "static" keyword in array parameters, as in void f(int a[static 10]), which promises the compiler a non-null pointer to at least 10 elements.
- Learning to use a new system effectively takes time and energy. Are you 100% sure that the benefit of the new system is great enough to offset these costs?
- Hybrid codebases have higher maintenance costs than homogeneous ones. If Kiwi mixes with Strawberry seamlessly, maybe that burden is lower, but the more true that is, the harder it is to believe that moving to Kiwi confers a large advantage.
In this specific case, I could believe that C++ by seasoned C++ users leads to demonstrable net gains. But it's easier for me to believe that switching to Rust or Haskell would confer higher gains, corresponding to the higher risk and higher cost of switching. So I think it's not that people are irrational about avoiding C++. I think that C++ is in an awkward place on the hill between C and things better than C++. If you need something better, you will go to something with better tradeoffs than C++; if you don't need something better, you just live with your existing C codebase.
But according to OP, it's not C++ that's irrational, it's the programmers who don't want to use C++.
- Show them how classes make things easier (automatic object management, some operator overloading, etc)
- Show how the STL makes most things easier (arrays, maps, etc)
How not to convince them:
- Show uses of excessive/pathological inheritance
- Use of templating beyond the basics
- Insist they use C++ functions for every single thing
- Insist they OOify every interface in their code
- Creating giant classes (structs) with a getter and setter for every field with no control or validation
- Going crazy with operator overloading
But according to TFA, people who prefer C are just being emotional and irrational.
 Scott Meyers. Things that Matter - DConf2017 [@27:51] (https://youtu.be/RT46MpK39rQ?t=27m51s)
I could talk to my Perl-programming colleagues for days about the advantages of Python over Perl, but what got them to switch was boto, since we were moving to AWS.
While this happens, it's also an empty argument that can be applied to everything.
How about this: the author misunderstands his own data on C++ because it challenges his preexisting beliefs (that C++ is "de facto" better)?
It goes downhill from there fast, arguing that those pesky people who dare not to want to use C++ are irrational, conditioned from childhood, etc. (those willing to use C++ are not, because of course C++ is the only reasonable choice a programmer can ever make between C and C++).
> In fact, Saks found that quite often logic, facts and the truth were simply not sufficient enough to convince people. Instead, people reacted in a very irrational and emotional way, and kept sticking to and defending their beliefs. People's basic reaction was: show me all the data you want, C++ is still undesirable.
The problem in the paper is that some BS arguments and numerical data in favor of C++ (which I'm assuming they have -- they fail to mention any of them in this article) are conflated with "THE truth".
Sorry, author, but you are not showing people "THE reality", you're showing them some arguments and some numbers.
The programmers you are talking to (those that have tried both C and C++) are the ones that have actual empirical experience from actual reality on what C++ gets them -- and whether its worth the tradeoffs they've seen.
For one, there's an ergonomic factor in language and API design (its usability) which can be highly subjective -- and syntax/API usability is one of the big reasons people dislike C++. This issue cannot be shot down with any "objective" argument or numbers table...
- Easier to write code generators for: I use libclang to read annotations and generate new code according to where the annotations are used. If I had to take care of every edge case and every new feature added to the latest C++ standard, the code generator would be more complex.
- Using plain old data structures: my code generator emits code that works with plain old data structures whose data can be interleaved or non-interleaved, using data-oriented design. Classes would not add much value.
- C compilers are easier to write: I integrated the Tiny C Compiler into my program to compile C code on demand. The C code can then use the code I've already written.
- No name mangling by default: I dynamically load a lot of plugins and do not want to deal with binary incompatibilities all the time (e.g., when plugins are compiled by different compilers, like the Tiny C Compiler).
- I mostly use libraries written in C
- Low-level access
If I need concepts or metaprogramming etc., I can already use Nim or write my own code generator; otherwise I would rather choose something different than C++.
Update: added some explanations
Though Keep It Simple, Stupid is really the only lesson one has to know/learn. When C++ serves a specific function, go with C++; when C is sufficient, use C. If a bash script is sufficient, use that.
If only people would show why something is cool in which situation, instead of why something should be used over something else. And this includes 'how to switch desktop OS' guides too.
Each side of the argument has its own set of premises/axioms for coming to certain conclusions, but there are always unknown truths, which people tend to ignore. If there were no unknown truths, then the argument would be contradictory.
You start criticizing people for not believing you when you have one example comparing two different programs implementing an unspecified task on an unspecified compiler. This is ridiculous.
I mean, for crying out loud, your headlining video is a talk by a person who admits that he hasn't done the thing he's trying to convince people to do.
He says things like this:
"You don't want to wait for the market to take care of this, you would like to take some proactive steps to be able to make more of the people who should be using C++ willing to use it."
in the context of aerospace, on which he clearly has no authority to speak, since he misses one of the fundamental reasons why C++ is not popular or even acceptable in much of aerospace: implicit allocation. Implicit allocation is incredibly dangerous for high-assurance systems. You really need to know exactly how much memory can be allocated, when, and what state the exact allocations will put the allocator in. C++ has some facilities to manage this, but man, it is easy to drop a plane from the sky by assuming that your allocation did what you wanted instead of verifying it.