I welcome Sega's announcement and will be delighted to hand over $1.99 to disable all ads - I know their games are of known quality, and will come without surprise violence included, etc.
By way of example I have one simple game he loves to play that randomly brings up images of a guy holding a girl in a headlock with a gun pointed at her head.... and the same ad comes up repeatedly. I can't even disable it via an in-app purchase (trust me, I tried).
As an aside, I'd welcome some suggestions of games he can play, and if anyone reading this is a game developer I'll be happy to provide any input to something you are dreaming up.
EDITS: just for clarity of reading
Also, at the current rate of release, it will take years to reach nearly all titles so I would take this with cautious optimism. Your favorite games may show up tomorrow or 3 years from now.
SEGA Forever is a free and growing classic games collection of nearly every SEGA game ever released from every console era: Master System, Genesis/Mega Drive, Dreamcast, and more. Available on iOS and Android mobile devices.
-Save your game progress
-Leaderboard -- compete with the world for high scores
-Controller support -- fully integrated wireless Bluetooth controller support
-Games released every month; download them all!
This is how old games should be handled throughout the industry when possible. The likelihood that someone not already familiar with a title or franchise will play it is a function of A) its cost and B) how dated it is. Once a game is seeing marginal returns, it's a very corporate mindset to try and suck it dry of every last penny. Especially when you view games as a form of art.
I fear for so many incredible titles, especially as we possibly enter a real VR age.
Unless I force it on them (I probably will), my children may never give a second glance to the titles I grew up with and consider masterpieces, when they could sensually immerse themselves in a modern AAA or VR title.
So many great soundtracks, assets, feats of code, all deserving to be in a museum somewhere, lost in the ever-growing sea of content. Eventually only treasure-hunters like myself will seek to experience and appreciate them.
Not only that, Sega can much more accurately determine what franchises might see profitable continuations, given a large enough sample size.
Having not played any of these titles on mobile myself, I can only imagine that Sega has ruined this very noble idea with intrusive ads and a payment scheme for removing them.
This may require paying natural gas generators for their ability to quickly throttle to back renewables, but only as a temporary measure until utility scale batteries fall in cost.
EDIT: Someone above provided a link that provides the figure I was looking for:
"Between 2010 and 2016, subsidies for solar were between 10 and 88 per kWh and subsidies for wind were between 1.3 and 5.7 per kWh. Subsidies for coal, natural gas and nuclear are all between 0.05 and 0.2 per kWh over all years." 
I wonder how much of a subsidy there is for LED lighting. A lot of energy goes to incandescent lighting.
Also, there should be a lot of subsidies to replace heaters in buildings burning #6 and #4 fuel oil, which are very, very dirty and polluting (NYC, where I live, banned #6 a few years ago but #4 is allowed to persist until 2030, I think).
"It's pretty slim pickings right now," Ferguson said. "God is not manufacturing more coastal property."
"That's because the market is so oversupplied that it's even difficult for the wind guys to make money at these electricity rates. And besides, it's hard to acquire land by the water at reasonable prices."
"It's pretty slim pickings right now," Ferguson said. "God is not manufacturing more coastal property."
In 70 years it will be nice and cheap at this rate, which is ironically and sadly just the sort of problem cheap wind power would help alleviate.
"They were actually worried about an 'energy crisis' back then. Didn't they realize free energy falls from the sky all day long?"
Please consider making it a priority - it looks like someone tried to submit a pull request for it but that failed? The older format uses a custom AES-based KDF - and while I don't personally see any major issues with it, I'm much more comfortable with the modern, heavily reviewed Argon2 design used in KDBX4.
[Edit: missing word]
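For context, the older transform is simple enough to sketch. This is my reading of the KDBX3-era AES-KDF (it assumes OpenSSL and is not KeePassXC's actual code): AES-256-ECB-encrypt the 32-byte composite key under the transform seed for N rounds, then SHA-256 the result.

    /* Minimal sketch of the KDBX3-era AES key transform (my reading of the
     * format, not KeePassXC's code). Build with: -lcrypto */
    #include <openssl/evp.h>
    #include <openssl/sha.h>
    #include <stdint.h>
    #include <string.h>

    int aes_kdf(uint8_t out[32], const uint8_t key[32],
                const uint8_t seed[32], uint64_t rounds)
    {
        uint8_t buf[32];
        memcpy(buf, key, 32);

        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        if (!ctx || EVP_EncryptInit_ex(ctx, EVP_aes_256_ecb(), NULL, seed, NULL) != 1)
            return -1;
        EVP_CIPHER_CTX_set_padding(ctx, 0);    /* raw two-block transform, no padding */

        int len;
        for (uint64_t i = 0; i < rounds; i++)  /* the tunable work factor */
            EVP_EncryptUpdate(ctx, buf, &len, buf, 32);

        EVP_CIPHER_CTX_free(ctx);
        SHA256(buf, 32, out);                  /* final transformed key */
        return 0;
    }

The catch: this construction is purely compute-bound, so GPUs and ASICs can grind through it relatively cheaply, whereas Argon2's memory-hardness is exactly what addresses that.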
My current goals are: internationalization, a nicer UI, and a clean, extensible code base. I already built the options page with material-ui and React. Currently I'm working on replacing the jQuery popup implementation with Hyperapp, which appeared here on HN yesterday.
If you're interested I can send instructions on how to build the extension. I would like to see this become an official part of KeePassXC and am willing to donate it for free. What do you guys think?
You can try options UI on https://mauron85.github.io/keepassxc-browser/preview/
Not that there's anything wrong with that. I'm just curious if KeePassXC is yet another fork, or if it's from the same people who did KeePassX. KeePassX has an excellent security reputation, so it'd suck if an unrelated fork ruined that.
BTW the Windows portable version link on your download page is 404ing:
Incorrect URL: https://github.com/keepassxreboot/keepassxc/releases/downloa...
Correct URL: https://github.com/keepassxreboot/keepassxc/releases/downloa...
1. Unlock using Yubikey
2. TOTP 2FA (sketched below)
3. Diceware password generator
4. ASLR for in-memory security (didn't expect this!)
5. Portable and Single instance mode (I'll have to check this one in detail)
Thanks for your work team!
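Regarding item 2, TOTP is pleasingly small. A minimal sketch per RFC 6238 (assumes OpenSSL; this is not KeePassXC's actual code): HMAC-SHA1 over the 30-second time counter, dynamic truncation, six digits.

    /* Minimal RFC 6238 TOTP sketch: 30-second steps, SHA-1, 6 digits,
     * as most authenticator apps use. Build with: -lcrypto */
    #include <openssl/hmac.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    uint32_t totp(const uint8_t *secret, size_t secret_len, time_t now)
    {
        uint64_t step = (uint64_t)now / 30;   /* 30-second time step */
        uint8_t msg[8];
        for (int i = 7; i >= 0; i--) {        /* big-endian counter */
            msg[i] = step & 0xff;
            step >>= 8;
        }
        uint8_t mac[20];
        unsigned int mac_len = sizeof mac;
        HMAC(EVP_sha1(), secret, (int)secret_len, msg, sizeof msg, mac, &mac_len);

        int off = mac[19] & 0x0f;             /* dynamic truncation (RFC 4226) */
        uint32_t code = ((uint32_t)(mac[off] & 0x7f) << 24)
                      | ((uint32_t)mac[off + 1] << 16)
                      | ((uint32_t)mac[off + 2] << 8)
                      |  (uint32_t)mac[off + 3];
        return code % 1000000;                /* 6-digit code */
    }

    int main(void)
    {
        const uint8_t secret[] = "12345678901234567890"; /* RFC 6238 test key */
        printf("%06u\n", (unsigned)totp(secret, 20, time(NULL)));
        return 0;
    }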
* Optionally display a secret as a QR code
* Generate and validate BIP39-compatible seeds (like Diceware but with a checksum; many Bitcoin wallets these days accept them - see the sketch after this list)
* Get this into Tails
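On the BIP39 item, the checksum rule is tiny. A minimal sketch (assumes OpenSSL; the all-zero entropy is a placeholder, never use it for a real wallet):

    /* Minimal sketch of the BIP39 checksum rule. Build with: -lcrypto */
    #include <openssl/sha.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t entropy[16] = {0};            /* 128 bits of entropy (ENT) */
        uint8_t hash[SHA256_DIGEST_LENGTH];
        SHA256(entropy, sizeof entropy, hash);

        /* Checksum = first ENT/32 bits of SHA-256(entropy): 4 bits here.
         * Appending it gives 132 bits = twelve 11-bit indices into the
         * 2048-word list, which is what makes mistyped seeds detectable. */
        int cs_bits = (int)(sizeof entropy * 8) / 32;
        uint8_t checksum = hash[0] >> (8 - cs_bits);
        printf("checksum bits: %x\n", checksum);
        return 0;
    }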
Physical object authentication is great except physical objects are less durable than brain memory (or at least, if my brain memory is gone then I probably would have no use for the password anyway).
I'm getting the feeling that this uses the older protocol?
I got a good answer to the above question from desdiv so I'm adding an edit:
Is there a reason to use this (and not use this) in place of KeePass2 on Windows?
1) Download website favicon (no clue how though, tried entering website but didn't see an option to download favicon)
2) Command-line interface; again, no clue how to use it.
2. Don't get certificates. If you meet a prospective employer who seems intensely interested in them, that's a red flag about that job.
3. The idea that you should aspire to being able to do your whole job from a Linux terminal is pretty silly. Use what works for you.
Maybe it takes more than 6 years in offensive security to realize this, but the #1 bit of advice for this field is: learn to enjoy coding. The worst possible place to end up in security is as a captive to available tooling.
To me this was money well spent.
Edit: To expand on the cert topic... if you want to do computer forensics for law offices, police departments, etc., you'll need a technical cert (GCFA, etc.), and having a CS/EE/CE degree won't hurt either. You'll have to have a cert to do serious forensic work.
Is blue better than black?
Do red pens last longer?
This is a very cool use of shaders for raytracing, but it is not terribly informative.
See, I'm old enough to remember this battle starting for music and movies. We know how that ended.
But now, it's for the very knowledge that drives our civilization.
"Stallman was right", oh how that statement is going to get tested.
Likewise, if research output is difficult to access, the feedback loop between ideas and implementation is broken; folks outside academia can't easily comment on the cutting edge work in a field, and academics only have to worry about what other academics think of their work.
Suppose someone is going to write a paper and needs to add a reference to another paper which they got from Sci-Hub. Is there any check on the originality of the referenced paper?
(I don't know exactly how publishing works)
An online book by UCLA economics professors Michele Boldrin and David Levine, making the case against intellectual property -- patents, and copyright most especially.
The opening chapter leads off with the patent battles of James Watt, which are credited by some authors with setting back the start date of the Industrial Revolution from 1769, when the patent was issued, to 1800, when it (after parliamentary extension) finally expired.
Joseph Stiglitz, "Knowledge as a Global Public Good," in Global Public Goods: International Cooperation in the 21st Century, Inge Kaul, Isabelle Grunberg, Marc A. Stern (eds.), United Nations Development Programme, New York: Oxford University Press, 1999, pp. 308-325.
Is it also our duty as the people to reduce all expenditures on software? Is piracy justifiable, especially within government institutions?
I guess my point is that necessary is a really strong claim and you can justify a lot of crazy stuff with that. Scientific progress has continued on just fine despite these cartels. With no supporting evidence I'd argue that today's scholars have hundreds of times the free information available than they did a century ago and that ramps up the further you go back. It's easy to imagine that Elsevier's lockdown of a paper is the difference between an academic breakthrough or not, but in reality that's probably not the case, even if it's a noble cause.
They showed that from their very early days. If you supported the company until last year, then you supported their behavior as a whole. You cannot have your cake and eat it too.
The yawing motion at the beginning of the video is because they moved the drone ship to avoid stormy seas, so the stage had to thrust sideways to retarget. In calm weather SpaceX positions the ship right along the ballistic path, so the stage only needs to pitch up and "flip."
You can also see the grid fins "pulling up" through the atmosphere to bleed off as much speed as possible. I described the optimization a while back. https://news.ycombinator.com/item?id=14288431
Fantastic job to everyone at SpaceX!
I mean, the company was founded only 15 years ago, they started (successfully) launching stuff into space only 10 years ago, and now it feels like they are able to launch rockets into space every week. Reusable rockets, we should add.
Musk very often sets impossible deadlines, but in this case, even if you take a step back, it's scary to make 10-year predictions based on this company's track record!
As a business, that's been SpaceX's biggest problem. Customers like the pricing but not the long delays. Finally, SpaceX seems to be getting past that.
Getting pad time at Canaveral is a bottleneck. SpaceX is still building their own launch site at Brownsville, TX, but that's going slowly. All SpaceX has there right now is some fill that's settling (the location is on sand maybe 2m above sea level) and a dish antenna. Next to be built: the fire station. First launch is now supposed to be no earlier than 2018.
Cast and cut titanium. They are about 4x5 feet and some of the largest (if not the largest) titanium castings in the world.
Titanium is an amazing material that is super hard to work with (special furnaces) and has its own set of risks (titanium fires, anyone?). I would love to see what goes into making those things because it simply has to be impressive.
If Elon can, I need to, as his protege (again, metaphorically).
... and import it to Stellarium: http://www.stellarium.org/wiki/index.php/Satellites_plugin
P.S.: Stellarium 0.16.0 was released a few days ago! https://sourceforge.net/p/stellarium/news/2017/06/stellarium...
So considering that, SpaceX has not proved anything yet, because the impossible or hard part is not launching and landing rockets. The hard part is to do it:
1. With the same or better reliability than completely new rockets.
2. With enough launch frequency to justify the reuse procedure.
So yeah, a couple of launches and reuses does not prove anything. It is a start, sure. But they have not yet proved wrong the others who didn't attempt this.
Problem: Under complex micro-architectural conditions, short loops of less than 64 instructions that use AH, BH, CH or DH registers as well as their corresponding wider register (e.g. RAX, EAX or AX for AH) may cause unpredictable system behavior. This can only happen when both logical processors on the same physical processor are active.
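For the curious, the code shape Intel is describing looks something like this contrived sketch (GCC-style inline asm on x86-64; it only illustrates the AH-plus-RAX mixing pattern in a short loop, and is in no way a reproducer for the bug):

    /* Contrived illustration of the erratum's trigger pattern: a short loop
     * that writes a high-byte register (AH) and also uses the corresponding
     * full-width register (RAX). NOT a reproducer, just the code shape. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t acc = 0;
        for (uint64_t i = 0; i < 1000000; i++) {
            __asm__ volatile(
                "movq %1, %%rax\n\t"
                "movb $1, %%ah\n\t"    /* touch the high-byte alias... */
                "addq %%rax, %0\n\t"   /* ...then consume the wide register */
                : "+r"(acc)
                : "r"(i)
                : "rax", "cc");
        }
        printf("%llu\n", (unsigned long long)acc);
        return 0;
    }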
I wonder how many users have experienced intermittent crashes etc. and just nonchalantly attributed it to something else like "buggy software" or even "cosmic ray", when it was actually a defect in the hardware. Or more importantly, how many engineers at Intel, working on these processors, saw this happen a few times and did the same.
More interestingly, I would love to read an actual detailed analysis of the problem. Was it a software-like bug in microcode e.g. neglecting some edge-case, or a hardware-level race condition related to marginal timing (that could be worked around by e.g. delaying one operation by a cycle or two)? It reminds me of bugs like https://news.ycombinator.com/item?id=11845770
This and the other rather scary post at http://danluu.com/cpu-bugs/ suggests to me that CPU manufacturers should do more regression testing, and far more of it. I would recommend demoscene productions, cracktros, and even certain malware, since they tend to exercise the hardware in ways that more "mainstream" software wouldn't come close to. ;-)
(To those wondering about ARM and other "simpler" SoCs in embedded systems etc.: they have just as many if not more hardware bugs than PCs. We don't hear about them often, since they are usually worked around in the software, which is usually customised exactly for the application and doesn't change much.)
The issue had been under investigation by the OCaml community since 2017-01-06, with reports of malfunctions going back at least to Q2 2016. It was narrowed down to Skylake with hyper-threading, which is a strong indication of a processor defect. Intel was contacted about it, but did not provide further feedback as far as we know. Fast-forward a few months, and Mark Shinwell noticed the mention of a possible fix for a microcode defect with unknown hit-ratio in the intel-microcode package changelog. He matched it to the issues the OCaml community was observing, verified that the microcode fix indeed solved the OCaml issue, and contacted the Debian maintainer about it. Apparently, Intel had indeed found the issue, *documented it* (see below) and *fixed it*. There was no direct feedback to the OCaml people, so they only found out about it later.
$ geteltorito n1cur14w.iso > eltorito-bios.iso    # geteltorito comes with the genisoimage package on Ubuntu
$ sudo dd if=eltorito-bios.iso of=/dev/sdXXX      # sdXXX = your USB drive; take care not to write over your disk
Many people have neither the interest nor the hardware access to overclock, and these processors have less overclocking headroom than earlier designs. Nevertheless, the hyper-threading hardware itself generates heat, restricting the overclocking range for given cpu cooling hardware. In this case, turning off hyper-threading pays for itself, because one can then overclock further, overtaking any advantage to hyper-threading.
If Intel used marketing names that were more closely related to technical reality, then when something like this happens they wouldn't have so many customers finding themselves in the "maybe I'm affected by this horrid bug" box.
If so, there's a way to disable hyper-threading, but you need Xcode (Instruments).
Open Instruments. Go to Preferences. Choose 'CPU'. Uncheck "Hardware Multi-Threading". Rebooting will reset it.
On laptops, some i5s are not real quad cores but dual cores with Hyperthreading.
Well done Debian folks!
Oh well... so far the machine (running Windows 10) has been stable minus one or two random lockups in 2 months of heavy usage which could be attributed to this. Guess I wait...
I've been using the Thunderbolt 3 dock with two external monitors and occasionally get a little glitch, probably a loose cable I think.
I've downloaded the Bitcoin blockchain, done quite a bit of work in PyCharm + Chrome across multiple projects with Flow and webpack in the background, and haven't had any sort of crashes though.
This isn't a hypothetical; what did Intel do when the only fix for broken functionality was to disable TSX entirely?
Also the advisory seems to imply that the OCaml compiler uses gcc for code generation, which it does not -- it generates assembly directly, only using gcc as a front end to the linker.
And is the microcode fix available for non-Linux systems yet?
I installed Debian 9, installed VirtualBox and Vagrant, and set up a clean development machine for myself; everything took 4 hours to finish.
I rebooted the virtual machine, and boom, there was a kernel panic which I sadly don't remember exactly / didn't take a picture of. After I rebooted the machine and opened a terminal, the system froze. The cursor wouldn't move. Reboot again: the motherboard had its CPU fail/undetected light on. Couldn't get it to boot after that.
I am both sad and relieved that bad stuff exists, but it's being patched to prevent proliferating.
I sincerely hope I'll get a replacement from Intel.
I wonder if Intel will do something like that again, or if the industry as a whole is more tolerant of unreliable / buggy behavior and will just live with it. See, for example, Apple just telling people that the poor reception strength was their own fault / changing software to hide problems / etc.
To this day I disable it by reflex on everything!
AMD on the other hand doesn't even acknowledge an issue when multiple customers report problems. See this Ryzen bug: https://community.amd.com/thread/215773
The banks outfitted buses, bars, pretty much everywhere with readers but even after inducements to use it such as half price beer(!) it still failed. Why? Because it was soooo slow. Waiting for ~45 seconds at the bar for a payment to go through got old really fast. It barely lasted a year.
I'd have thought the friction of the payment would have been a lesson learned, but here we are 22 years later and it's still a pain.
Until some years ago, most terminals would mirror that. Most prominently, they used to have separate "enter pin" and "verify transaction amount" steps, and included longer delays for displayed status codes. Recent devices have started combining these steps ("Amount: xy. Enter PIN to confirm") and status messages.
Newer use-cases like the contactless qVDSC application have been tuned for better performance, limiting the amount of communication between reader and card.
For more details, have a look at this guide from VISA: https://www.visa.com/chip/merchants/grow-your-business/payme...
Back here in Australia, almost every retailer (including those on 3G EFTPOS machines) takes < 4s from when I tap my card to when I can start walking away. So much quicker than cash :-)
However, with chip transactions there are multiple calls for different payment processing flows. For example, a transaction could require 5 round-trip request/responses between the chip and the payment processor, meaning 5x the time required.
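To put rough numbers on that (a back-of-the-envelope sketch; the per-link RTT figures are my guesses for illustration, not measurements):

    /* Back-of-the-envelope: authorization time scales with round trips x RTT.
     * The RTT figures below are assumptions, not measurements. */
    #include <stdio.h>

    int main(void)
    {
        struct { const char *link; double rtt_s; } links[] = {
            { "fibre/DSL",    0.05 },
            { "POTS dial-up", 0.40 },
            { "2.5G (GPRS)",  0.70 },
        };
        const int round_trips = 5;  /* per the flow described above */

        for (int i = 0; i < 3; i++)
            printf("%-13s -> ~%.1f s\n",
                   links[i].link, round_trips * links[i].rtt_s);
        return 0;
    }

Which is why the same card can feel instant at one shop and glacial at the next: the chatty flow multiplies whatever latency the terminal's link has.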
I never received a conclusive answer explaining the difference (note: I know, we can all make guesses).
This has been a mess since the mid 90s, when I first worked on these things.
Here's a cruddy, not-at-all-useful link to the standard:
It's very common for bars and restaurants to have a dedicated line for the terminal, but usually they'll skimp on tech (I have seen dial-up over POTS even in fibre-capable premises). Also very common to use 3G or 2.5G.
It'd take a tech all of 5 minutes to diagnose and suggest a fix for 98% of these slow terminals. It's strange seeing businesses not look to fix these issues. If I were a payment provider I'd probably run diagnostics against my customers' terminals every day and force poorly performing customers to have someone come in and fix them.
I thought it now depends on the firmware in the card readers, which it seems companies like Index control.
This list is missing RF chips, though I suppose success is from RF technology on chips, not any single killer chip.
A good way to resolve this would be to reform the accredited investor laws into something more meritocratic. Instead of needing to own one million dollars in assets, there ought to be some sort of knowledge-based competency exam so that regular people can invest in ideas they think are worthwhile.
1. The purple cow effect - the opportunities for highest growth may be in underserved segments which are best addressed by founders less represented in Silicon Valley. (The next big thing may be a farming startup in India, but the founder won't fit the "well connected or well advised by people with startup exits" model this framework uses, and thus will be missed by SV investors, which leads to problem 2.)
2. The money trumps product problem - whatever you can't beat with a solid product, you can always hammer with more money in the bank. Instead of hearing about a farming innovation in India, American farmers could be getting FarmVille ads on TV instead and tuning out. Since VCs invest locally, even if a startup starts picking up steam in Chicago, SF investors who don't have a toe in that pond pick the local fish that eats the same algae and fund it far better than the Chicago company, which might have a better product. In a land-grab industry that money may be enough to win adoption for the SF pond-dweller, but returns for the entire market will be lower, due to lower product quality and the tendency of big firms here to pick only one company per industry.
So the framework, biased by past data, may skew future data away from results that would be optimal without a framework.
But this problem hasn't gone unnoticed and there are some ideas around how to solve the "pick the rare winners" problem:
1. Andreessen: Don't pick winners. Invest in the startup after it has already demonstrated itself to be a winner but before it goes public. This is the safe growing area.
2. McClure: Invest small amounts, early so that you can afford to spread the investment over many companies.
3. Thiel, Gurley: Be right
4. Graham: combination of #2 and #3
5. Doerr: Network like crazy to have a shot at being in the few good ones (this assumes you will recognize them)
#3 is not necessarily something we can reproduce
The real optimal setup here would be to pair that kind of mathematical rigor with the dealflow of an a16z or KP. I would suspect that both of those two would say that a similar model exists in the heads of their partners so far as pattern recognition, but..
pharma : drug :: studio : movie :: vc : startup
Reading the docs, though, it does seem like it's one legendary AWS outage away from being a huge problem:
>Torus's infrastructure has been designed from the ground up for resilience without any single point of failure. All of our services are autoscaled and run in multiple availability zones in the us-east-1 region.
Hashicorp Vault is more difficult to put in place, but it does the right thing. With its custom backends it can generate temporary credentials, for example to access the database. Those tokens are short-lived and part of the audit log.
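For anyone curious, fetching a dynamic database credential is just an authenticated read against the secrets engine. A minimal sketch with libcurl, assuming a database engine mounted at database/ with a hypothetical role "my-role" and VAULT_TOKEN exported:

    /* Minimal sketch: fetch short-lived DB creds from Vault's database
     * secrets engine. Mount point and role name are hypothetical.
     * Build with: -lcurl */
    #include <curl/curl.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *token = getenv("VAULT_TOKEN");
        if (!token) { fprintf(stderr, "VAULT_TOKEN not set\n"); return 1; }

        char hdr[512];
        snprintf(hdr, sizeof hdr, "X-Vault-Token: %s", token);
        struct curl_slist *hdrs = curl_slist_append(NULL, hdr);

        CURL *curl = curl_easy_init();
        if (!curl) return 1;
        curl_easy_setopt(curl, CURLOPT_URL,
                         "http://127.0.0.1:8200/v1/database/creds/my-role");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);

        /* The JSON response (printed to stdout) carries a freshly generated
         * username/password plus a lease; Vault revokes them when the lease
         * expires, and the read itself lands in the audit log. */
        CURLcode rc = curl_easy_perform(curl);

        curl_slist_free_all(hdrs);
        curl_easy_cleanup(curl);
        return rc == CURLE_OK ? 0 : 1;
    }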
I personally found the experience of using Torus to be great. Getting a quick working setup is easy and it doesn't take much effort to transition from there to locked down access control. Will likely continue using it outside of the Docker context.
For example, say I don't need to optimize my command submission, or I just want the main device. Or maybe my common use case is a compute shader, just like the program submitted in this Hacker News post. I would want a class that just exposed a simple API like runComputeShader() or something.
EDIT: Ok, not the pictures of the executives and the stuff they handed out for sales person of the quarter :-)
I even worked for them in Nashville before grad school. Sold a lot of cell phones, but my favorite was selling a karaoke machine to Harry Connick, Jr.
I also loved shopping there... the items seemed to be obscure; growing up I always wondered who actually shopped there for these items to justify an entire store... but the demand was there, they just never figured out how to capitalize on their early-mover advantage.
What a strange approach. Why not create a function with the correct name, put the code in it and redirect the misspelled function to it?
Mark the bad function deprecated, warn the teams and after a few release cycles, remove it.
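Something like this (hypothetical names, riffing on creat):

    /* Hypothetical illustration of the suggested migration path. */
    #include <stdio.h>

    /* Step 1: the correctly spelled function owns the implementation. */
    int create_file(const char *path, int mode)
    {
        printf("creating %s with mode %o\n", path, mode);
        return 0;
    }

    /* Step 2: the misspelled original becomes a deprecated thin wrapper,
     * so old callers keep working but see a warning at compile time. */
    __attribute__((deprecated("use create_file() instead")))
    int creat_file(const char *path, int mode)
    {
        return create_file(path, mode);
    }

    int main(void)
    {
        return create_file("demo.txt", 0644);  /* new callers use the new name */
    }

Step 3, per the above: after a few release cycles, delete the wrapper.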
So he teases a significant detail of the answer, then notes that not only won't he explain but he will not allow anyone else to explain either - no reason given. Then, when people unsurprisingly do discuss it, he gets mad and deletes the comments.
That's in addition to a lot of aggressive or condescending answers I've read to what seem quite reasonable questions.
If commenters are such a problem, wouldn't it be better by now to simply close comments completely?
It goes way back to the murky beginnings of Unix.
Ken Thompson was once asked what he would do differently if he were redesigning the UNIX system. His reply: "I'd spell creat with an e."
Go figure why Raymond Chen decided to do a blog post on this, though, since apparently it is a verboten topic - which leads to a weirdly sparse article that doesn't answer its title question, and a censored comment thread.
It's annoying, but given that it's nearly a decade old it's probably not worth getting upset over it.
"This program was designed to maximize the bother function for structured programmers. This program takes goto statements to their logical conclusion. The layout and choice of names are classic."
If you aren't serious about actually removing a bad API at some point, don't change anything. Otherwise, you create two things that need to be tested/supported/kept binary-compatible/etc. instead of one, raising technical debt when you were supposed to lower it.
They call this out in a couple of places but never quite connect the dots. There is "He renovated the first floor to attract customers from farther away, customers who might have more money to spend and more places to go than Johnstown." - to pull money from towns farther away - and "But fewer people can afford his products now that the good jobs are long gone, and Mr. Apryle has had to make adjustments."
It's not Amazon, it's not 'big box' chains, it's that the city no longer has a production base, and so the fraction of GDP this space used to produce has gone away.
I'm pessimistic that the federal or even state governments have the will to do anything about it, in terms of policy to effect change.
Not that top-down change always works (Japan, for example, has made many, many half-hearted and ill-conceived attempts at restarting its growth engine), but smaller, nimbler economies have been able to manoeuvre around economic obstacles (like Taiwan, Singapore and S. Korea) whereas others have stumbled and fumbled (Malaysia, Mexico).
That said, we need to try something. Trump tapped into this angst but it does not look like he (or the party) will deliver in the least. Nevertheless, the issue is not going away and will only become more pronounced. Someone will have to do _something_ about it. People will get upset and more radical elements will be elected until someone begins to take notice and does something substantial about the economic decline of the part of the middle class that got by on medium skills.
The lack of demand snowballs and branches out to impact many other things the exact same way that a strong increase in demand spreads out to impact other things.
With a predominantly consumer driven economy, eventually all of this will catch up and be a significant drag on GDP at best, though it'd be easy to argue we're already at exactly that point.
So, what happens next and what do we do about it? If only we had some sort of large team of people who were elected to solve this kind of problem, and if only they had some sort of historical periods of widespread prosperity to reference and model policy on....
If you want to work, you should look for what people need rather than whatever you did for a job 20 years ago. For example, we'll always need energy and food, but we won't always need cowboys and coal miners. Instead of trying to be a coal miner, try to be an energy worker.
Rural counties and small metropolitan areas account for about 23 percent of traditional American retail employment, but they are home to just 13 percent of e-commerce positions.
Almost all customer fulfillment centers run by the online shopping behemoth Amazon are in metropolitan areas with more than 250,000 people close to the bulk of its customers
"I'm thinking about what's next," he said. "We're essentially thinking of Johnstown as an economic development laboratory."
In the UK we had many communities thrown on the scrapheap, with mines, steel and much other manufacturing gone. Initially people invest their redundancy money in things such as a new dog-grooming business, a cafe, a shop, perhaps a tattoo parlour, depending on where 'follow your passion' leads. So these businesses go okay for a while, until eventually the redundancy money runs out, or, in rare instances, the 'follow the passion' business actually meets a genuine need and a success story happens.
Around the time of this general decline my sister was trying to raise money to go somewhere fancy with her group of friends. Being skint she decided to raise some money by making things - things with beads, jewellery, that sort of stuff. These items sold 'well' but only to her friends whom she was going to be travelling with. There was no 'external market'. So rather than go on the big trip to the festival they missed out on that and spent what little money they had on beads etc. to make stuff to sell to each other. A lesson in economics was learned the hard way.
To some extent any town/city/country that does not have manufacturing and external markets will be a variant of my sister's schooldays model of capitalism. If manufacturing (or mining) jobs go and retail comes along to 'fill the gap', then it cannot last forever. Tourism can't come to the rescue either.
Where Boost is a bit lacking is modularity, and library quality varies somewhat, but this is kind of expected considering the number of subprojects.
With increased modularity, I'm sure this situation will be improved.
There is a huge amount of instantly-deployable capital in Ethereum and Bitcoin, held by individuals now seeking ways to put it to work. Many technologists will naturally make use of this energy, but so will BS artists and people under the influence of hubris.
I am making this appeal because it is irresponsible to stand by and watch so much financial energy be directed at projects without appropriate constraint. Many firms funded by ICOs (or soon to be) have qualified leaders and developers and are developing great ideas, but IMO many are also raising too much money for their own good.
What may be needed to guide these rockets to their full potential are maturity, mentorship, and the careful application of jurisprudence and principles of accounting.
Will a non-nation state be able to generate enough credibility to run their own currency that lasts a lifetime?
I don't know, but I think it's interesting to watch.
And if they do, it may force central banks into a corner by limiting their ability to print money (i.e., lowering the value of your currency by printing money will cause people to move their money to the alternative currency, leaving you with inflation).
Would they be saving on transaction costs?
I believe most of the money that is ending up in Bitcoin exchanges is laundered in the first place. Also, there is a whole lot more being laundered every day than Bitcoin being mined. That would mean Bitcoin will keep going up in value. Gold should correspondingly fall.
Umm, no easier than raising funds for any other venture; in fact probably more difficult, and I would think a lot of that money is coming from people who were already made rich by other coins like Bitcoin. Easy come, easy go, but enjoy the gravy train while you are riding it.
With age comes maturity. It's allowed me to get along better with my co-workers (so many upper 20s, lower 30s tend to be a bit... hot headed) and I'm not afraid to negotiate. The older I've gotten, the more comfortable in my skin and in my skill set I've become.
If there's ageism I've yet to experience it and I've worked with people well in their 50s doing mobile. It comes down to who you're working for, what your skill set is, timing and, in my opinion, health. You've got to stay healthy and look healthy! Also, I tend to prune my resume, no one wants to see your experience 8+ years ago.
Or maybe it's all luck, ask me in five years what I think when Objc/Swift goes the way of PHP, I might be singing a different tune.
I am constantly pestered by recruiters and companies to interview. I think one of the things that helps is that I trimmed my resume to omit material older than 5 years, removed unnecessary dates, and I make a point of drawing attention to studying for new industry certifications. It probably doesn't hurt that I stay physically fit, either. As cruel as it may be, if you're out of shape and look "frumpy" or "run down," that will count against you far, far worse than your age.
The key is to make the age factor irrelevant by not drawing unnecessary attention to it or projecting a stereotypical "middle-aged" image. We can argue all day about whether that's fair (it's not), but you have to do what you have to do.
Many 40+ year olds have families or other life obligations outside of work, and thus they may not be as willing to put in absurd hours that a young employee could be squeezed for. The older employee also might be more likely to take weekends off, and want to use vacation time.
Additionally, someone over 40 is supposed to be in or near their peak earning years, which last ten to twenty more years (or did historically, anyway), so the expenditure on them is going to be significantly more than on someone with a few years of experience.
I realize it's one of the lamest analogies possible, but for many companies, employees are quite literally a cog in a larger machine, and so they want the cheapest possible cog at a reasonable quality level that works the longest before breaking down (quitting, getting fired, burning out, etc).
To fight this, I'd bet those over 40 would have to aim to get into important management and executive positions, which are less likely to be swapped out for less experienced, less demanding, and cheaper labor.
This is why I decided to start my own company again after 4 1/2 years at Square, which was probably the last time I will ever work for someone else.
This way age becomes an asset rather than a liability.
This is exactly the sort of thing they make better.
Job applications generally don't and possibly can't ask about age, but it's a required field on applications for YC or other incubators.
It isn't a perfect approach, but it makes it more difficult to discriminate since by the time you have a face-to-face interview with the employer you are already well along in the interview process.
Not trying to be in denial, but all these jobs appear to be blue-collar jobs (assuming "administrators" means office admins and not sys admins / network admins :).
Any data on whether this happens in our Tech / I.T. Industry, where every other month, you read a story on severe shortage of skilled tech workers everywhere?
Challenge is to prove it.
Might this be a clue?
Any job application endpoint within reach of an automated process must be a joke. How many other automated submissions do they get selling employee skills, sex, penis enlargement pills, fast loans, malware and other trash?
Being asked to disclose such information should be the first "BS" smell for a job. I'm often well into getting work done before the person I'm working with figures out how old I am, where I live or other personal details. Granted, that's freelancing.
I live in the Philippines, where age requirements are actually advertised. And these ages seem pulled out of a hat. And I get the sense that the people running the show at all levels couldn't tell their asses from a hole in the ground. It's a pleasant surprise to find someone who can convince you there's some purpose to the role they're filling with a serious time commitment out of their life.
Clearly the hiring process is just as broken as everything else. Why expect that hiring is going to be significantly more awesome than the rest of the system?
Don't interact with machines. Get to know real people. Show people what you can do. Preferably find people who tell you they could use your help rather than you telling them that you need a job. ;)
As a hypothetical employer I might think these roles are lesser roles, so if you are older I would wonder why you had not been able to secure a greater economic situation.
Second, as consumer facing roles, attractiveness is a benefit, with those over 40 having less of it. It is not discrimination on the basis of age, but attractiveness that would be the cause.
Another talking point: the trend of waiting until the late 30s to have kids greatly compounds this problem.
I work fifteen hours a day, can't stop learning and trying new technologies to keep myself on top of the wave.
Open question - is there any research on to what degree these three worries are true?
First off, companies like Cisco, Microsoft and Oracle, and even Google, are running entirely off of the centralized education system in the US. These are companies that got their software into curricula and taught everyone their way of doing things, then engaged in shoving their software and a lot of labor into large organizations, making a gargantuan mess that was glossed over with lots of "free overtime". Compare Cisco CLI to Juniper, or Microsoft to Debian, or MS SQL to Oracle; who's moving in whose direction?
If you made the investment in understanding exclusively those companies' products, you went along on the technological imperialism trip, and now that you're on the other side, if you never spent time understanding the theory or building critical thinking skills, you're washed up. 20 years working on a massive Oracle mainframe or with purely Cisco R&S becomes a liability, the reason being, you never tried to find a better way to do things or to eliminate your job and replace it with something better.
There's an honesty in meritocracy: the market has always valued the independent-thinking, hard-working, incredibly knowledgeable IT staff with a tremendous depth of understanding of infrastructure, programming, politics, and equipment over what 95% of the IT market has become. 95% of the people I've worked with expect the solution to be in some arcane Google search result or in a book; they don't expect to go on the journey of finding the answer. What they never develop is real creativity, a real understanding of the systems they work with, or a real understanding of the architecture, why things are done, or the process of building on themselves; of setting a path for themselves and others that eventually brings about a finished product.
The entire IT industry is maturing and getting older, and as it does, older staff who haven't done this are viewed as a liability. I'll agree, there are all kinds of ways to try to hire gullible people who don't know their own self-worth. Fact is though, those kinds of companies are on a long-term death spiral of their own making. Every time a large corp outsources, I go look at the 10-K and I see a major cash flow problem of management's making. "The old cranky sysadmin way" is beginning to take hold at more and more companies, and that will trickle into academia as time goes on, as management begins to understand what technological imperialism means and what the results are; generally, a total mess.
It's very controversial to say these things, because it makes a lot of people who aren't that good, or who invested their time in the wrong things, feel like they are doomed. Fact is, there's no set career path in IT like there is in other fields such as law, stockbroking, research science and academia.
The trick I've discovered is to put in no more than 40 hrs a week at work; if more time is needed, put it in at home: practice, do architecture work, learn algorithms, make good notes, read programming, architecture and project management books. 40 hrs a week is for work, 10-20 hrs a week is for self-betterment. Then you come into work and find ways to eliminate your job: a new approach that saves boatloads of time. Get your assignments done early, then either come up with a new project to work on, move on, or study. Within a few years of doing this, you will be a top-tier programmer/architect/systems admin, whatever you want to do.
And while I do feel for people who feel they've fallen behind due to having a family, the fact is, from my perspective, the real issues with society are things like 21% of GDP being spent on a scummy healthcare industry, the high incomes of the top 1%, and the lack of wage-parity tariffs on imports from China. The baby boomers have really messed things up for us. The fact that you can't go from a high-paying IT job to a factory job or retail management position and still have enough money to put your family in a decent home with 3 hots and a cot and to put your kids through school and college is a failure of society in general, not the IT industry. Those issues need fixing and, frankly, contribute a heck of a lot to our messed-up society.
Life is challenging for me now, and this article directly applies to me. Oh, and there's that thing about being a white male and suicide. Now imagine being a kind of intimidating-looking type in a very non-white, anti-Trump area.
Tribalism is real. Had I stuck with my tribe early on I'd be more secure. My demise is probable at this time.
I am solid in my desire to self terminate yet lack the ability to overcome fear of death. I stay alive but it's closer than ever. It's almost a humane thing to let me go. I wouldn't wish my brain on anyone.
Good luck all.
Imagine observing a man shaking his leg, first one then the other; then his whole body convulses and twitches - is he dancing?
Absent the knowledge that a wasp has flown up his trouser leg.
Copying without comprehension may lead to getting stung!
Inverse Reinforcement Learning to reverse-engineer goals will be needed, especially for embodied AI in partially observed environments, i.e. the real world (as opposed to simulations).
Berkeley's CS294-112, Deep Reinforcement Learning for Robotics, provides good coverage of methods including mirroring, DAgger, Deep-Q, iLQR, and IRL.
1. Calorie restriction is almost impossible for most humans to follow long term
2. Many of the same benefits of calorie restriction may be achievable through intermittent fasting, which is much easier for people to follow. I've been experimenting with a daily 16 hour fast (all calories consumed within an 8-hour window) and have seen small improvements in my energy and general well-being, although it's decades too early to say how this will impact my longevity.
It's amazingly difficult to find clear information anywhere. For instance, the CDC nutrition guidelines?! They're an example of something that, in trying to be comprehensive, results in a mess of awkwardly qualified terms and difficult-to-digest (couldn't help it) high-level recommendations.
USDA Food Patterns, Healthy US-Style Eating Pattern. https://health.gov/dietaryguidelines/2015/guidelines/appendi...
Meat, poultry and eggs in the same line? They are completely different foods with wildly varying nutrients.
[UPDATED: good start] - Pictorial Nutrition Guidelines: http://www.fremont.k12.ca.us/cms/lib04/CA01000848/Centricity...
- Comparison of International Dietary Guidelines and Food Guides in Twelve Countries across Stages of the Nutrition Transition http://www.fasebj.org/content/29/1_Supplement/898.36.short
It seems that "calorie restriction" is a misleading term. If these funds are correct, CR should be considered the norm/ideal. That means the majority, currently, over-consume.
That aside. I wonder if it's also related to modern food, and the production there of. If inflammation is the root of most disease, and CR, I presume, reduces inflammation, then what is it about our foods that trigger so much inflammation?
Finally, when all said and done, I predict this will be found to connect with gut bacteria in some way. That is, CR effects the gut, and that effect is ultimately a positive for the whole body.
I also dropped almost all of my sugar intake. I drink about 3L of water per day and have one cup of coffee in the morning.
I've been able to maintain a healthy weight (6'4" tall, ~185lbs) and I rarely feel truly hungry. On the days where I have more than one large meal, I tend to feel tired and foggy.
It's completely anecdotal, but this pattern has worked out well for me.
The main issue with articles like this is that they propose lame diets that strip out enjoyment. If you want to eat eggs basted in butter, do it -- but balance that out with exercise or somewhere else in your diet. No use suffering so you can live a few extra years at the tail end of your life.
I wonder if habitual CR inevitably trains the body to stabilize its weight, and if that cannot be undone if practiced too long. That would then put you at increased mortality risk when you start losing body mass in old age. Caged-animal studies might miss this risk because a valued lab rat won't suffer sudden injuries and the resulting prolonged hospital stays.
Most of the articles focus on results, but not on the actual process.
If one is eating sitting on the couch, watching television, completely oblivious of the activity, it is not going to help.
There is nothing absolute in nature, everything is connected.
How we eat is far more important than how much we eat.
One simple experiment one can do at dinner is to sit alone, without any distraction with the dinner plate and for every morsel one takes in, chew it till you count 20 and then swallow.
Notice what happens...
I know the hunger will drive you crazy in the first few days, but your body gets used to it and it's not so bad after 3 days.
The first time you do it, perhaps start off with some 16-hour fasts for a few months till you're used to that; then you can increase it.
- Processed food costs less than fresh food.
- Multiple generations of people have been conditioned to eat processed and junk food.
- People believe in the myth that 'cooking takes too long'.
- People are lazy and don't want to spend any time on cooking/food, even when living a life of luxury compared to the rest of the world.
All of which is the opposite of most developing countries. US and UK are very close in this respect. Even many European countries value shopping for and eating fresh food much more.
CR, and eating in general should be about buying nutrient dense but low caloric foods - which is a mostly plant based diet of fresh food. The exact kind of food which is artificially expensive and considered a 'fad'.
> But the latest results suggested that significant health benefits can be garnered in an already healthy body - a person who isn't underweight or obese.
The take away here is that the problem with excessive eating isn't just the weight gain. There's something about eating itself which is stressful. Restricting calories is good for you even if you're already within a healthy BMI. This means that just because you can eat anything, doesn't mean you should.
Do non-growing adults (30+ years old) enjoy greater benefits than small children who have faster metabolisms and growth spurts ahead of them?
My intuition says that even if the concept still fits well with growing children, their cycles are different, and closer together in frequency. Maybe they don't eat more calories, but eating several times a day might possibly tie into cycles of blood serum nutrient levels, properly based on weight and activity.
And it's no secret that being overweight causes premature aging, health problems and eventually premature death.
Steak, sandwiches, cheese, potatoes: all good. Salad, bananas, cereal: all good.
I had to make sure to look at the nutritional info. If the food was small but energy-dense, I had to eat less of it.
Now imagine if you wanted to have a bit of toast, maybe with some chocolate-hazelnut spread. All you would need to do is walk into the garden, grab some hazelnuts off the tree and some cocoa beans, perhaps with some cane sugar for good measure. Some blender in the kitchen would be able to make your hazelnut spread, just so long as you shelled the nuts first. Similarly with the bread, in this garden, free for the taking would be some strong wheat that you can put through some kitchen appliance, then after a few hours with the breadmaking machine the bread would be good to eat. Butter would be equally simple too, you just needed to milk a cow, put the milk in some glorified washing machine with a bit of salt, then wait a while to get the freshest butter ever tasted.
Would you be able to complete all of these tasks by breakfast? Would you really bother to shell all of those hazelnuts? Would you question why it is that you have juiced 14 oranges when actually just the one orange, non-juiced was satisfying enough and didn't require all that effort dicking about with juicing?
Of course people have allotments and smallholdings so this does happen, albeit with greenhouses instead of some magic 'always in season' aspect. But my office workmates of the obese variety have no idea of the effort needed to get their food, even if it is healthy food. The connection is not there.
One of my dreams is to rock up at the doctor's one day having eaten too much fruit and veg, and for the doctor to recommend that I stay off the veggies and eat some sugary snacks instead. I am fairly sure that no amount of fruit and veg would age me, not in the way that sugary snacks, beef products and everything processed would.
Will calorie restriction work in humans? http://dx.doi.org/10.18632/aging.100581
Caloric restriction improves health and survival of rhesus monkeys: https://dx.doi.org/10.1038/ncomms14063
In general, the consensus in the research community is that we shouldn't expect more than an additional ~5 years from the life-long practice of calorie restriction. The evolutionary argument is that the calorie restriction response evolved very early on as a way to enhance fitness given seasonal famines. A season is a long time for a mouse, not so long for a human, so only the mouse evolves a very plastic lifespan. The practical argument is that 5-10 years is about the largest effect that could exist and still be hard to pull out from existing demographic data on restricted calorie intake in a bulletproof, rigorous way. Obviously any much larger effect would have been discovered in antiquity and be very well known and characterized by now.
So that said about longevity, it is very clear that calorie restriction does better and more reliable things for health in ordinary humans in the short term of months and mid-term of few years than any presently available enhancement technology can replicate.
A good deal of research into aging is focused on trying to recreate the calorie restriction response. So far this has consumed billions with little of practical use to show for it beyond increased knowledge of some thin slices of cellular biochemistry relating to nutrient sensing and energy metabolism. It has proven to be very hard and very expensive to get anywhere here.
So calorie restriction itself is free and reliable in its effects. Everyone should give it a try. There are, however, far more important areas of aging research to direct funding to instead of trying to recreate this effect with pharmaceuticals. In an age in which meaningful rejuvenation is possible to create in the years ahead (see, for example, clearance of senescent cells, something that calorie restriction can only slightly achieve in a very tiny way, while drug candidates are managing 25-50% clearance) it seems just plain dumb to instead be chasing expensive, hard ways to only slightly slow down aging.
Edit: "Permanently [...] may turn out to have a profound effect on your future life, according to [...] scientific studies."
And that's a Bingo, folks.
So the "eat less" claim in this article is again back to the old MD advice, "eat less and move more". Not going to work if you don't look at your macronutrients.
Verizon now owns Tumblr. AT&T e-mail addresses will suddenly be blacklisted from logging in to Tumblr.
Maybe I am reading too much into this?
Edit: According to TechCrunch (https://techcrunch.com/2017/06/25/take-the-oath/) it's all cool. Except that the explanation still makes no sense. It sounds like what they are saying is that (for example) an att.net e-mail address won't be a Yahoo account anymore, which I assume means that Yahoo won't be hosting their e-mail or something like that. But why wouldn't those addresses become the "username" on Yahoo logins, in the same way that any e-mail address can sign up as a Google account even though the e-mail itself is hosted elsewhere (i.e. my "username" for Google could be a non-Google-hosted e-mail address, but my e-mail itself has nothing to do with Google - i.e. it's not a Gmail account)?
The A330 doesn't have fuel dumping nozzles as standard equipment so that may not have been an option.
From a slightly dodgy source (http://www.pprune.org/tech-log/117765-a330-fuel-consumption....) they'd burn 6 tonnes on takeoff, another ~7 in the 1.5 hours before the engine failed, and approximately another 7 before landing, but I don't know what the consumption would be like for a single-engine return.
That gives 20 tonnes total fuel burn. At those figures the 5:40 flight would take 41.5 tonnes of fuel, plus a bit extra.
As per usual, more well-founded technical discussion on this incident is ongoing over at AV Herald: http://avherald.com/h?comment=4aac9f14&opt=0
It's truly frightening, and makes me think twice when any of these airlines come up when I'm searching for flight deals.
Anytime I fly in that area of the world, I make it a point to avoid Malaysian and Indonesian airlines at all costs. Fly Cathay or SingAir and their affiliates if you can. The cost premium is well worth it.