The bigger "win" in this is the transmission loss. Consumption close to generation has lower transmission loss so it's innately higher efficiency in that one regard.
PHES and battery technology probably matter more now than PV as the cost problem in generation: we need time shifting for solar power to replace other forms of generation, so it can serve demand when the sun isn't shining.
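A back-of-envelope sketch of the transmission-loss point above. The 5% loss figure is an assumed round number for illustration, not a measured value:

```python
# Local consumption avoids line losses; remote generation pays them.
# The loss fraction here is an assumption, not real grid data.

def delivered_energy(generated_kwh, transmission_loss_fraction):
    """Energy that actually reaches the load after line losses."""
    return generated_kwh * (1.0 - transmission_loss_fraction)

remote = delivered_energy(100.0, 0.05)  # far-away utility-scale plant
local = delivered_energy(100.0, 0.0)    # rooftop PV consumed on site

print(f"remote: {remote:.1f} kWh delivered, local: {local:.1f} kWh delivered")
```

Same generation, less energy actually served, which is the "innate efficiency" the comment is pointing at.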
Edit here if anyone's curious: https://goo.gl/maps/SjzWw9b2dSH2
Edit 2 - as an aside, when I first saw it, I couldn't tell if it was solar or ag (e.g. here's what ag looks like - https://goo.gl/maps/AGvAxJ61BwB2 - if you zoom out they look very similar!)
Ultimately the goal is to wean off of fossil fuels, but if all our R&D is unprofitable we may miss some breakthroughs.
As you would expect, there are a lot of factors that go into how much an array would save/produce (generation, storage, etc.), but a regulatory factor that changes everything is rate structures. Rate structures are far from standardized: pretty much wherever you go, there's something different. They vary by state and even at the city level for municipally-owned utilities (about 15% of the US is served by these, including parts of the Bay Area, LA area, Phoenix, Seattle, etc.).
For consuming energy, there is usually a flat rate or a time-of-use rate (many varieties), but capacity fees and fixed charges are increasingly taking over. For producing energy, it gets much stranger. Many cities and states use versions of net metering, some will pay you the wholesale power rate and others will pay you the retail rate (retail is ~3x wholesale), some will use a feed-in tariff, some will factor in a more time-based rate (like time-of-use above), and there are others too. If you want to know more about rate structures in general, check this out. https://en.m.wikipedia.org/wiki/Net_metering https://en.m.wikipedia.org/wiki/Feed-in_tariff https://www.google.com/url?sa=t&source=web&rct=j&url=https:/...
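To make the wholesale-vs-retail difference concrete, here's a rough sketch. All numbers are invented round figures (retail set at ~3x wholesale, as mentioned above), not real tariffs:

```python
# How the compensation rule alone changes the value of the same exported solar.
# Rates and export volume are assumptions for illustration only.

RETAIL_RATE = 0.12     # $/kWh the utility charges you (assumed)
WHOLESALE_RATE = 0.04  # $/kWh wholesale power price (assumed, ~1/3 of retail)

def annual_export_credit(exported_kwh, rate_per_kwh):
    """Yearly credit for energy the array sends back to the grid."""
    return exported_kwh * rate_per_kwh

exported = 3000  # kWh/year exported by a hypothetical residential array

print("net metering at retail:", annual_export_credit(exported, RETAIL_RATE))
print("wholesale compensation:", annual_export_credit(exported, WHOLESALE_RATE))
```

Same hardware, same sun, roughly 3x difference in the export credit, which is why the rate structure can dominate the payback calculation.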
Rate structures are heavily regulated, for good reason. Designing them is a very difficult and pretty murky task. On one hand are the consumers and their desire to connect solar and other DERs like storage to lower costs. On the other hand are the utilities, usually not acting malevolently, wanting to maintain reliability and, at all costs, to avoid the death spiral: as more people connect solar or leave the grid entirely, fixed costs get spread over fewer customers, which raises rates, pushes even more people to leave, and ultimately skyrockets costs and tanks reliability. Though sometimes, the generators will desperately lobby against them. Depending on where you are, the utilities can be the generators too, which is another matter. http://www2.epri.com/Our-Work/Pages/Distributed-Electricity-... https://www.greentechmedia.com/articles/read/this-is-what-th...
Rate structures are arguably the largest factor in installing solar. Initial costs are important, but the rate structure will shape returns over the system's 30+ year life. In some places, like North Carolina, it can lead to solar flourishing. In other places, hostile rate structures and other regulations can severely harm solar's adoption, like Florida, which should be the best place in the US for photovoltaics.
Even more complicating is the ability of the grid to handle a lot of solar, let alone other DERs. The energy grid in the near future can be highly distributed, 100% renewable, and even more reliable than it is today, but there are some big system-level problems to solve before then (these rarely get any attention; most attention goes to node-level problems like sheer generation). I am fully engulfed in this field and am working on these things now. I thought I would present an important point and give y'all some information on this field that I find absolutely riveting. :)
This alone is grounds for opening and drinking very expensive champagne and/or wine.
However, I wish the mac release was available as independent binaries without having to come along with Xcode.
Because we didn't need to learn a new language/IDE/environment at the same time that we learned a new paradigm, we were able to keep our feet on solid ground while working things out; we were familiar with the syntax, so as soon as we realized how to "wire something up," we could do so with minimal frustration and no need/ability to Google anything. Of course, it was left to a subsequent course to learn HDL and load it on real hardware, but for a theoretical basis, this was a perfect format. Much better than written tests!
 http://www.cs.princeton.edu/courses/archive/fall10/cos375/de... - see links under Design Project, specifically http://www.cs.princeton.edu/courses/archive/fall10/cos375/Cp...
Does this confusion typically happen to engineers who are trying to teach themselves hardware design, or is it just an indication of a terribly-designed curriculum?
> The reality is that no digital logic design can work without a clock. There is always some physical process creating the inputs. These inputs must all be valid at some start time; this time forms the first clock tick in their design. Likewise, the outputs are then required from those inputs some time later. The time when all the outputs are valid for a given set of inputs forms the next clock tick in a clockless design. Perhaps the first clock tick is when the last switch on their board is set and the last clock tick is when their eye reads the result. It doesn't matter: there is a clock.
Put another way, combinatorial systems (the AND/OR/etc logic gates that form the hardware logic of the chip) have a physical propagation delay: the time it takes for the input signals at a given state to propagate through the logic and produce a stable output.
Do not use the output signal before it is stable. That way lies glitches and the death of your design.
Clocks are used to tell your logic: "NOW your inputs are valid".
The deeper your combinatorial logic (the more gates in a given signal path), the longer the propagation delay. And the maximum propagation delay across your entire chip determines your minimum clock period (and thus your maximum clock speed).
There exist clockless designs, but they get exponentially more complicated as you add more signals and the logic gets deeper. In a way, clocks let you "compartmentalize" the logic, simplifying the design.
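The propagation-delay point above can be sketched as a toy static-timing calculation. The path names and per-gate delays are invented for illustration; real timing analysis is far more involved:

```python
# Toy static timing sketch: the slowest (critical) path through the
# combinatorial logic sets the minimum clock period, hence the max frequency.
# Each list holds assumed per-gate delays (ns) along one signal path.

paths = {
    "adder_carry_chain": [0.9, 0.9, 0.9, 0.9],  # four gate delays in series
    "decoder":           [0.7, 0.7],
    "mux_select":        [0.5, 0.5, 0.5],
}

# Minimum clock period = worst-case (longest) total path delay.
critical_delay_ns = max(sum(delays) for delays in paths.values())
max_freq_mhz = 1000.0 / critical_delay_ns  # period in ns -> frequency in MHz

print(f"critical path: {critical_delay_ns:.1f} ns "
      f"-> max clock ~{max_freq_mhz:.0f} MHz")
```

Note that shortening the decoder or mux paths changes nothing here: only the deepest path (the carry chain) limits the clock, which is why pipelining (splitting deep logic with registers) raises the achievable frequency.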
 What's the most widespread fundamental gate in the latest fab processes nowadays? Is it NAND?
 or at least clock domain
Another way I try to explain hardware design to people coming from a software background:
You get one choice to put down in hardware as many functions as you want. You cannot change any of them later. All you can do later is sequence them in whatever order you need to accomplish your goal.
If you think of it this way, you realize that the clock is critical (that's what makes sequencing possible), and re-use of fixed functions introduces you to hardware sharing, pipelining, etc.
But it's hard to grasp.
This is not true.
"HDL based hardware loops are not like this at all. Instead, the HDL synthesis tool uses the loop description to make several copies of the logic all running in parallel."
This is not true as a general statement. There are for loops in HDLs that behave exactly like software loops. And there are generative for loops that make copies of logic.
Also, the "everything happens at once" claim is not true either. In fact, without the delay between two events happening, synchronous digital design would not work (specifically, flip-flops would not work).
From the perspective of an electrical engineer and computer scientist, asynchronous circuits theoretically can be faster and more efficient. Without the restraint of a clock slowing down an entire circuit for its slowest component, asynchronous circuits can instead operate as soon as data is available, while consuming less power to overhead functions such as generating the clock and powering components that are not changing state. However, asynchronous circuits are largely the plaything of researchers, and the vast majority of today's circuits are synchronous (clocked).
The reason why we use synchronous circuits, which may relate to the reason why many students learning circuits often try to make circuits without clocks, is because of abstraction. Clocked circuits can have individual components/stages developed and analyzed separately. You leave problems that do not pertain to the function of a circuit such as data availability and stability to the clock of the overall circuit (clk-to-q delay, hold delay, etc), and can focus on functionality within an individual stage. As well, components of a circuit can be analyzed by tools we've built to automate the difficult parts of circuit design, such as routing, power supply and heat dissipation, etc. This makes developing complex circuits with large teams of engineers "easier." The abstraction of synchronous circuits is one step above asynchronous circuits. Without a clock, asynchronous circuits can run into problems where outputs of components are actually wrong for a brief moment of time due to race conditions, a problem which synchronous circuit design stops by holding information between stages stable until everything is ready to go.
The article's point of hardware design beginning with the clock is useful when you are trying to teach software engineers, who are used to thinking in a synchronous, ordered manner, about practical hardware design, which is done almost entirely with clocks. However, it is not the complete picture when trying to build an understanding of electrical engineering from the ground up. Synchronous circuits are built from asynchronous circuits, which were built from our understanding of E&M physics. Synchronous circuits are then used to build the ASICs, FPGAs, and CPUs that power our routers and computers, which run instructions based on ISAs that we compile down to from higher-level languages. It's hardly surprising that engineers who are learning hardware design build clockless circuits - they aren't wrong for designing something "simple" and correct, even if it isn't currently practical. They're just operating on the wrong level of abstraction, which they should have a cursory knowledge of so synchronous circuits make sense to them.
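The synchronous abstraction described above can be sketched with a toy Python model of a register (this is an illustrative model, not any real simulator's API): the combinational inputs may wiggle freely between edges, but the stable output only updates on the clock edge, so downstream stages never see glitches.

```python
# Minimal model of a clocked register: D can change continuously
# (like glitchy combinational logic), but Q only updates on the edge.

class Register:
    def __init__(self, initial=0):
        self.q = initial    # stable output, visible to the next stage
        self._d = initial   # pending input, may be mid-glitch

    def set_d(self, value):
        """Combinational logic drives D continuously between edges."""
        self._d = value

    def clock_edge(self):
        """Only here does the visible output change."""
        self.q = self._d

stage1 = Register()
stage2 = Register()

for cycle, data_in in enumerate([1, 2, 3]):
    stage1.set_d(data_in)        # inputs settle during the cycle...
    stage2.set_d(stage1.q + 10)  # ...using last cycle's stable outputs
    stage1.clock_edge()          # then both registers sample together
    stage2.clock_edge()
    print(cycle, stage1.q, stage2.q)
```

Each value takes two cycles to reach stage2's output: that one-cycle-per-stage latency is exactly the "holding information between stages stable until everything is ready" behavior described above.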
Please ask any questions or make any comments you have about Redox!
But, we also had every customer on a VLAN, limited to only being able to send traffic from their IPs, and also blocking incoming and outgoing bogon traffic.
Years ago I attended a presentation by Evi Nemeth (RIP) related to CAIDA, and one thing they found in auditing "backbone" traffic was that some huge percentage of it was bogon traffic (I don't recall the exact number, but let's say 10% +/- 6%). Nobody wanted to filter that traffic because the pipes were less expensive than routers capable of filtering packets at high pps rates.
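For anyone unfamiliar with the term, here's a rough sketch of what bogon filtering means in principle. Real routers do this with ACLs/uRPF at line rate, and this prefix list is a small illustrative subset, not an authoritative bogon list:

```python
# Bogon check: addresses from reserved/private ranges that should never
# appear as source addresses on the public internet.
import ipaddress

# Small, non-authoritative subset of bogon prefixes for illustration.
BOGON_PREFIXES = [
    ipaddress.ip_network(p)
    for p in ("0.0.0.0/8", "10.0.0.0/8", "127.0.0.0/8",
              "169.254.0.0/16", "172.16.0.0/12", "192.168.0.0/16")
]

def is_bogon(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BOGON_PREFIXES)

print(is_bogon("10.1.2.3"))  # True: RFC 1918 space, shouldn't hit a backbone
print(is_bogon("8.8.8.8"))   # False: ordinary public address
```

The expensive part on a backbone isn't this lookup logic, it's doing it per-packet at millions of packets per second, which is why the routers cost more than the pipes.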
It's secure, they can't break out of the jail.
It's rate limited to prevent them causing much damage to anyone.
It's easy to observe every thing they type and do in the jail from the host.
This is like a wild meadow turning to a manicured lawn. The near-monoculture of the web will have a much harder time withstanding legal assault by state actors than a distributed web would have.
Yes. At least this should ring a bell for all those who still think they can write anything on FB about a government and get away with it. Facebook, while pushed and portrayed as your personal diary, is actually a digital repository accessible only to the most elite, like governments.
While an average FB user can easily shame anyone around him (like how frustrated boyfriends shame their ex-girlfriends), fellow average FBians can't do much about it. This reiterates the capitalist world we live in, where democracy is just a myth.
Laughed out loud at this. I can understand wanting access to the market but this is just embarrassingly desperate.
The cold reality is that this is not true. And, thus, in time, to exist within a repressive regime requires importing the repression within the software. This is the bargain Facebook wants to make.
I really do not want facebook's new world.
That is a really scary thing to have read. Perhaps the New York Times is out of line in using it, but if that metaphor is even 10% accurate that would be very bad.
To give some background, the Scramble for Africa is the only time I've ever read the words "worse than the Holocaust" where the writer had a serious argument. This was in reference to the mass deaths in the Congo under King Leopold of Belgium, as documented in the book King Leopold's Ghost.
I know a ton of people have died in history and there have been so many wars, but the Scramble for Africa was really really really bad.
Yeah, so I guess in conclusion: the NY Times shouldn't have used that phrase, Facebook sucks, and even if they (Facebook) mess up almost everywhere else, please, please don't let them mess up the African continent.
> This project is licensed under the NANOME VR PRODUCT SUITE
Digging a bit, it appears this is funded by an ICO, or at least created by a company currently running an ICO? Too bad that basically short-circuits to "smells fishy" to me right now; hopefully they can pull through and build a track record of credibility.
Someone should make a VR version for mechanical motions, like that "How to Make a Car" course that was posted here a while ago.
If EE/CS departments of colleges adopt RISC V hardware for teaching their students, providing cheap microcontrollers and boards to students at the start of their semester classes, those precocious little buggers are going to build Doom clones and help port their favorite flavor of linux onto them. When you've got a generation of top talent tinkering with an ISA that doesn't suck like x86, you're going to see adoption in actual industry.
The ecosystem is starting to pull industry players.
Aside from GroupGets and MassDrop, who else is operating in the crowd-funded, discounted bulk-purchase collaborative.. consumption.. space?
I'm still a little unclear on RISC-V's goals - are they looking at the microcontroller market or are they looking more to offer an alternative for ARM and x86 CPUs?
In the microcontroller market there's a lot of competition right now, especially with devices like the ESP32 going for $8 with wifi and bluetooth.