
Gridcoin 5.0.0.0-Mandatory "Fern" Release

https://github.com/gridcoin-community/Gridcoin-Research/releases/tag/5.0.0.0
Finally! After over ten months of development and testing, "Fern" has arrived! This is a whopper. 240 pull requests merged. Essentially, the complete rewrite that was started with the scraper (the "neural net" rewrite) in "Denise" has now been completed. Practically the ENTIRE Gridcoin-specific codebase resting on top of the vanilla Bitcoin/Peercoin/Blackcoin PoS code has been rewritten. This removes the team requirement at last (see below), although there are many other important improvements besides that.
Fern was a monumental undertaking. We had to encode all of the old rules active for the v10 block protocol in new code and ensure that the new code was 100% compatible. This had to be done in such a way as to clear out all of the old spaghetti and ring-fence it with tightly controlled class implementations. We then wrote an entirely new, simplified ruleset for research rewards and reengineered contracts (which includes beacon management, polls, and voting) using properly classed code. The fundamentals of Gridcoin with this release are now on a very sound and maintainable footing, and the developers believe the codebase as updated here will serve as the fundamental basis for Gridcoin's future roadmap.
We have been testing this for MONTHS on testnet in various stages. The v10 (legacy) compatibility code has been running on testnet continuously as it was developed to ensure compatibility with existing nodes. During the last few months, we have done two private testnet forks and then the full public testnet testing for v11 code (the new protocol which is what Fern implements). The developers have also been running non-staking "sentinel" nodes on mainnet with this code to verify that the consensus rules are problem-free for the legacy compatibility code on the broader mainnet. We believe this amount of testing is going to result in a smooth rollout.
Given the amount of changes in Fern, I am presenting TWO changelogs below. One is high level, which summarizes the most significant changes in the protocol. The second changelog is the detailed one in the usual format, and gives you an inkling of the size of this release.

Highlights

Protocol

Note that the protocol changes will not become active until we cross the hard-fork transition height to v11, which has been set at 2053000. Given current average block spacing, this should happen around October 4, about one month from now.
Note that to get all of the beacons in the network on the new protocol, we are requiring ALL beacons to be validated. A two week (14 day) grace period is provided by the code, starting at the time of the transition height, for people currently holding a beacon to validate the beacon and prevent it from expiring. That means that EVERY CRUNCHER must advertise and validate their beacon AFTER the v11 transition (around Oct 4th) and BEFORE October 18th (or more precisely, 14 days from the actual date of the v11 transition). If you do not advertise and validate your beacon by this time, your beacon will expire and you will stop earning research rewards until you advertise and validate a new beacon. This process has been made much easier by a brand new beacon "wizard" that helps manage beacon advertisements and renewals. Once a beacon has been validated and is a v11 protocol beacon, the normal 180 day expiration rules apply. Note, however, that the 180 day expiration on research rewards has been removed with the Fern update. This means that while your beacon might expire after 180 days, your earned research rewards will be retained and can be claimed by advertising a beacon with the same CPID and going through the validation process again. In other words, you do not lose any earned research rewards if you do not stake a block within 180 days and keep your beacon up-to-date.
The transition height is also when the team requirement will be relaxed for the network.

GUI

Besides the beacon wizard, there are a number of improvements to the GUI, including new UI transaction types (and icons) for staking the superblock, sidestake sends, beacon advertisement, voting, poll creation, and transactions with a message. The main screen has been revamped with a better summary section, and better status icons. Several changes under the hood have improved GUI performance. And finally, the diagnostics have been revamped.

Blockchain

The wallet sync speed has been DRASTICALLY improved. A decent machine with a good network connection should be able to sync the entire mainnet blockchain in less than 4 hours. A fast machine with a really fast network connection and a good SSD can do it in about 2.5 hours. One of our goals was to reduce or eliminate the reliance on snapshots for mainnet, and I think we have accomplished that goal with the new sync speed. We have also streamlined the in-memory structures for the blockchain which shaves some memory use.
There are so many goodies here it is hard to summarize them all.
I would like to thank all of the contributors to this release, but especially thank @cyrossignol, whose incredible contributions formed the backbone of this release. I would also like to pay special thanks to @barton2526, @caraka, and @Quezacoatl1, who tirelessly helped during the testing and polishing phase on testnet with testing and repeated builds for all architectures.
The developers are proud to present this release to the community and we believe this represents the starting point for a true renaissance for Gridcoin!

Summary Changelog

Accrual

Changed

Most significantly, nodes calculate research rewards directly from the magnitudes in EACH superblock between stakes instead of using a two- or three-point average based on a CPID's current magnitude and the magnitude for the CPID when it last staked. For the long-timers in the community, this has been referred to as "Superblock Windows," and was first done in proof-of-concept form by @denravonska.
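For readers who want to picture the difference, here is a deliberately oversimplified C++ sketch. It is not the actual Gridcoin accrual code; the function names and the reward-per-magnitude constant are invented purely to contrast "sum over every superblock in the window" with the old "average of two magnitude points":

```cpp
#include <cstdio>
#include <vector>

// Illustrative only: reward accrued between two stakes, given the CPID's
// magnitude recorded in each superblock over that window.
double AccrueFromSuperblocks(const std::vector<double>& magnitudes,
                             double rewardPerMagnitudePerSuperblock) {
    double total = 0.0;
    for (double m : magnitudes)                  // one term per superblock
        total += m * rewardPerMagnitudePerSuperblock;
    return total;
}

// The legacy idea being replaced: a two-point average of the magnitude now and
// the magnitude at the last stake, applied to the whole elapsed period.
double AccrueFromTwoPointAverage(double magnitudeThen, double magnitudeNow,
                                 double rewardPerMagnitudePerSuperblock,
                                 int superblocksElapsed) {
    double avg = (magnitudeThen + magnitudeNow) / 2.0;
    return avg * rewardPerMagnitudePerSuperblock * superblocksElapsed;
}

int main() {
    std::vector<double> mags{100.0, 120.0, 90.0};  // one entry per superblock
    std::printf("per-superblock: %.1f\n", AccrueFromSuperblocks(mags, 0.25));
    std::printf("two-point avg:  %.1f\n", AccrueFromTwoPointAverage(100.0, 90.0, 0.25, 3));
}
```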

Removed

Beacons

Added

Changed

Removed

Unaltered

As a reminder:

Superblocks

Added

Changed

Removed

Voting

Added

Changed

Removed

Detailed Changelog

[5.0.0.0] 2020-09-03, mandatory, "Fern"

Added

Changed

Removed

Fixed

submitted by jamescowens to gridcoin

Why Osana takes so long? (Programmer's point of view on current situation)

I decided to write a comment somewhere about «Why Osana takes so long?» and what can be done to shorten this time. It turned into a long essay. Here's the TL;DR of it:
The cost of never paying down this technical debt is clear; eventually the cost to deliver functionality will become so slow that it is easy for a well-designed competitive software product to overtake the badly-designed software in terms of features. In my experience, badly designed software can also lead to a more stressed engineering workforce, in turn leading higher staff churn (which in turn affects costs and productivity when delivering features). Additionally, due to the complexity in a given codebase, the ability to accurately estimate work will also disappear.
Junade Ali, Mastering PHP Design Patterns (2016)
Longer version: I am not sure if people here wanted an explanation from a real developer who works with C and with relatively large projects, but I am going to do it nonetheless. I am not much interested in Yandere Simulator nor in this genre in general, but this particular project offers a lot of lessons for fellow programmers and software engineers who want to make sure they never end up in Alex's situation, especially considering that he is definitely not the first one to get himself knee-deep in development hell (remember Star Citizen?) and he is definitely not the last one.
On the one hand, people see that Alex works incredibly slowly, the equivalent of, like, one hour per day, compared with, say, Papers, Please, the game that was developed in nine months from start to finish by one guy. On the other hand, Alex himself most likely thinks that he works until complete exhaustion each day. In fact, I highly suspect that both those statements are correct! Because of the mistakes made during the early development stages, which are highly unlikely to be fixed due to the pressure put on the developer right now and due to his overall approach to coding, the cost of adding any relatively large feature (e.g. Osana) can be comparable to the cost of creating a fan game from start to finish. Trust me, I've seen his leaked source code (don't tell anybody about that) and I know what I am talking about. The largest problem in Yandere Simulator right now is its super slow development. So, without further ado, let's talk about how «implementing the low hanging fruit» crippled the development and, more importantly, what would have been an ideal course of action, from my point of view, to get out of it. I'll try to explain things in the easiest terms possible.
  1. else if's and the lack of any refactoring in general
The most «memey» one. I won't talk about performance though (a switch statement is not better in terms of performance; that is a myth. If the compiler detects some code that can be turned into a jump table, for example, it will do it, no matter whether it is a chain of if's or a switch statement. Compilers nowadays are way smarter than one might think). Just take a look here. I know that it's his older JavaScript code, but, believe it or not, this piece is still present in the C# version relatively untouched.
I refactored this code for you using the C language (mixed with C++, since there's no this pointer in pure C). Take note that the else if's are still there; else if's are not the problem by themselves.
The refactored code is just objectively better for one simple reason: it is shorter while not being obscure, and now it should be able to handle, say, the Trespassing and Blood cases without any input from the developer, thanks to the use of flags (a minimal sketch of the idea follows at the end of this section). Basically, the shorter your code, the more you can see on screen without spreading your attention too thin. As a rule of thumb, the fewer lines there are, the easier it is to work with the code. Just don't overdo it, unless you are going to participate in the International Obfuscated C Code Contest. Let me reiterate:
Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.
Antoine de Saint-Exupéry
This is why refactoring — the activity of rewriting your old code so that it does the same thing, but does it faster, in a more generic way, in fewer lines, or more simply — is so powerful. In my experience, you can only keep one module/class/whatever in your head if it does not exceed ~1000 lines, maybe ~1500. Splitting a 17000-line class into smaller classes probably won't improve performance at all, but it will make working with parts of that class way easier.
Is it too late now to start refactoring? Of course NO: better late than never.
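To make the flag idea above concrete, here is a minimal C++ sketch (not the leaked code; the flag names and reactions are invented for illustration). The point is that a new observable condition becomes one new flag rather than yet another hand-written branch:

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical witness-reaction logic, sketched with bit flags instead of a
// long chain of special cases. One flag per observed condition.
enum StudentFlag : std::uint32_t {
    SawWeapon      = 1u << 0,
    SawBlood       = 1u << 1,
    SawTrespassing = 1u << 2,
    SawMurder      = 1u << 3,
};

// Any combination of "suspicious" conditions takes the same reaction path, so
// adding a new condition means adding a new flag, not a new branch.
constexpr std::uint32_t kSuspicious =
    SawWeapon | SawBlood | SawTrespassing | SawMurder;

void ReactToPlayer(std::uint32_t flags) {
    if (flags & SawMurder) {
        std::puts("Run away and call the police");
    } else if (flags & kSuspicious) {
        std::puts("Become suspicious and warn the player");
    } else {
        std::puts("Ignore the player");
    }
}

int main() {
    ReactToPlayer(SawBlood | SawTrespassing);  // combined cases need no extra code
}
```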
  2. Comments
If you think that because you wrote this code you'll always easily remember it, I have some bad news for you: you won't. In my experience, one week and that's it. That's why comments are so crucial. It is not necessary to put a ton of comments everywhere, but just a general idea will help you out in the future, even if you think that It Just Works™ and you'll never ever need to fix it. The time spent writing and debugging one line of code almost always exceeds the time needed to write one comment in large-scale projects. Moreover, the best code is code that is self-evident. In the example above, what the hell does (float) 6 mean? Why not wrap it in a constant with a good, self-descriptive name? Again, it won't affect performance, since the C# compiler is smart enough to silently remove this constant from the generated code and place its value into the method invocation directly. Such constants are there for you.
I rewrote my code above a little bit to illustrate this. With those comments, you don't have to remember your code at all, since its functionality is outlined in two tiny lines of comments above it. Moreover, even a person with zero knowledge of programming will figure out the purpose of this code. It took me less than half a minute to write those comments, but it'll probably save me quite a lot of time figuring out «what was I thinking back then» one day.
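As a rough illustration of the named-constant point, here is a tiny sketch in C++ rather than the game's C#; the constant name and the function are invented for the example:

```cpp
// Before: a magic number whose meaning the reader has to guess.
//     float limit = (float) 6;

// After: one self-descriptive constant plus a one-line comment.
// How close (in meters) the player must be before a student notices them.
constexpr float kNoticePlayerRadius = 6.0f;

bool CanNoticePlayer(float distanceToPlayer) {
    return distanceToPlayer <= kNoticePlayerRadius;  // reads like the design doc
}
```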
Is it too late now to start adding comments? Again, of course NO. Don't be lazy, and redirect all the typing that goes into the «debunk» page (which pretty much does the opposite of debunking, but who am I to judge?) into some useful comments.
  3. Unit testing
This is often neglected, but consider the following. You wrote some code, you ran your game, you saw a new bug. Was it introduced just now? Is it a problem in your older code that has only shown up because you had never actually exercised that code until now? Where should you search for it? You have no idea, and you have one painful debugging session ahead. Just imagine how much easier it would be if you had some routines which automatically execute after each build and check that the environment is still sane and nothing broke on a fundamental level. This is called unit testing, and yes, unit tests won't be able to catch all your bugs, but even getting 20% of bugs identified at an earlier stage is a huge boon to development speed.
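A minimal sketch of what such a routine can look like, using plain C++ and assert instead of a real test framework; the function under test is invented for the example:

```cpp
#include <cassert>

// Function under test (hypothetical): reputation must never leave [0, 100].
int ClampReputation(int value) {
    if (value < 0)   return 0;
    if (value > 100) return 100;
    return value;
}

// A tiny "unit test": run it automatically after every build and it will fail
// the moment a refactor breaks the invariant, long before you launch the game.
void TestClampReputation() {
    assert(ClampReputation(-5)  == 0);
    assert(ClampReputation(42)  == 42);
    assert(ClampReputation(999) == 100);
}

int main() {
    TestClampReputation();
    return 0;  // a failed assert means a broken build, caught immediately
}
```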
Is it too late now to start adding unit tests? Kinda YES and NO at the same time. Unit testing works best if it covers the majority of a project's code. On the other hand, a journey of a thousand miles begins with a single step. If you decide to start refactoring your code, writing a unit test before refactoring will help you prove to yourself that you have not broken anything, without even needing to run the game.
  4. Static code analysis
This one is pretty self-explanatory. You set this thing up once, and you forget about it. A static code analyzer is another «free real estate» way to speed up the development process by finding tiny little errors, mostly silly typos (do you think you are good enough at finding them? Well, good luck catching x << 4; in place of x <<= 4; buried deep in C code by eye!). Again, this is not a silver bullet; it is another tool which will help you out with debugging a little bit, along with the debugger, unit tests and other things. You need every little bit of help here.
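Here is the kind of typo I mean, as a compilable C++ snippet. Any static analyzer, and even a compiler with warnings enabled (e.g. -Wunused-value in GCC/Clang), should flag the first statement as having no effect:

```cpp
#include <cstdio>

int main() {
    unsigned x = 3;

    x << 4;   // typo: computes a value and throws it away; x is still 3
              // (typically reported as "expression result unused")
    x <<= 4;  // intended: shifts in place; x is now 48

    std::printf("%u\n", x);
}
```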
Is it too late now to hook up static code analyzer? Obviously NO.
  5. Code architecture
Say you want to build Osana, but then you decide to implement some other feature, e.g. Snap Mode. By doing this you have maybe made your game a little bit better, but what you have essentially done is complicate your life, because now you also have to write Osana code for Snap Mode. The way the game architecture is done right now, easter egg code is deeply interleaved with the game logic, which leads to code «spaghettification», which in turn slows down the addition of new features, because one has to consider how each new feature will work alongside each and every old feature and easter egg. Even if it is just glancing over one line per easter egg, it adds to the mess, slowly but surely.
A lot of people mention that the developer should have been doing it in an object-oriented way. However, there is no silver bullet in programming. It does not matter that much whether you do it the object-oriented way or the usual procedural way; you could theoretically write, say, the AI routines in a functional language (e.g. LISP) or even in a logic language if you are brave enough (e.g. Prolog). You can even invent your own tiny programming language! The only thing that matters is code quality and avoiding the so-called shotgun surgery situation, which plagues Yandere Simulator from top to bottom right now. Is there a way of adding a new feature without interfering with your older code (e.g. by creating a child class which encapsulates everything you need)? Go for it; this feature is basically «free» for you. Otherwise you'd better think twice before doing it, because you are going into «technical debt» territory, borrowing time from the future by saying «I'll maybe optimize it later» and «a thousand more lines probably won't slow me down in the future that much, right?». Technical debt will incur interest of its own that you'll have to pay. Basically, the entire situation around Osana right now is a huge tale about how just the «interest» incurred by technical debt can control an entire project, like the tail wagging the dog.
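To illustrate what "encapsulating a feature so it doesn't interfere with older code" can look like, here is a hedged C++ sketch. Yandere Simulator itself is Unity/C#, so the real mechanism would differ, and every class name here is invented: optional features register themselves through a small hook interface, so deleting Snap Mode means deleting one class, and adding Osana never has to touch it.

```cpp
#include <memory>
#include <vector>

// Hypothetical: core game logic exposes a small hook interface, and each
// optional feature (an easter egg, Snap Mode, ...) lives in its own class.
class GameplayHook {
public:
    virtual ~GameplayHook() = default;
    virtual void OnStudentUpdate() {}  // called once per student per frame
    virtual void OnPlayerCaught()  {}  // called when the player is spotted
};

// Snap Mode is self-contained: removing the feature means removing this class.
class SnapModeHook : public GameplayHook {
public:
    void OnPlayerCaught() override { /* trigger the snap-mode sequence */ }
};

class Game {
public:
    void AddHook(std::unique_ptr<GameplayHook> hook) {
        hooks_.push_back(std::move(hook));
    }
    void PlayerCaught() {
        for (auto& h : hooks_) h->OnPlayerCaught();  // core logic stays untouched
    }
private:
    std::vector<std::unique_ptr<GameplayHook>> hooks_;
};

int main() {
    Game game;
    game.AddHook(std::make_unique<SnapModeHook>());
    game.PlayerCaught();
}
```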
I won't elaborate here further, since it'll take me an even larger post to fully describe what's wrong about Yandere Simulator's code architecture.
Is it too late to rebuild the code architecture? Sadly, YES, although it should be possible to split the Student class into descendants by using hooks for individual students. However, the code architecture can be improved by a vast margin if you start removing easter eggs and features like Snap Mode that currently bloat Yandere Simulator. I know it is going to be painful, but it is the only way to improve code quality here and now. This will simplify the code, and that will make it easier to add the «real» features, like Osana or whatever else you'd like to accomplish. If you ever want them back, you can track them down in the Git history and re-implement them one by one, hopefully without performing shotgun surgery this time.
  6. Loading times
Again, I won't be talking about performance, since you can debug your game at 20 FPS just as well as at 60 FPS; that is a very different story. Yandere Simulator is huge. Once you've fixed a bug, you want to test it, right? And your workflow right now probably looks like this:
  1. Fix the code (unavoidable time loss)
  2. Rebuild the project (can take a loooong time)
  3. Load your game (can take a loooong time)
  4. Test it (unavoidable time loss, unless another bug has popped up via unit testing, code analyzer etc.)
And you can fix that. For instance, I know that Yandere Simulator generates all the students' photos during loading. Why should that be done there? Why not either move it to the project-building stage by adding a build hook so Unity does it for you during a full project rebuild, or, even better, disable it completely or replace the photos with «PLACEHOLDER» text for debug builds? Each second spent watching the loading screen will be rightfully interpreted as «son is not coding» by the community.
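A sketch of the "placeholder in debug builds" idea, written in C++ with conditional compilation. The actual game is Unity/C#, so the real mechanism would differ, and all names here are invented:

```cpp
#include <cstdio>
#include <string>
#include <vector>

struct Student { std::string name; std::string portrait; };

// Stand-in for the expensive per-student photo generation.
std::string RenderPortrait(const Student& s) { return "photo_of_" + s.name + ".png"; }

// Hypothetical loading step: debug builds skip the expensive work entirely and
// use a placeholder, so "fix bug -> reload -> test" stops costing minutes.
void LoadPortraits(std::vector<Student>& students) {
#ifdef DEBUG_BUILD
    for (auto& s : students) s.portrait = "PLACEHOLDER";
#else
    for (auto& s : students) s.portrait = RenderPortrait(s);
#endif
}

int main() {
    std::vector<Student> students{{"Osana", ""}};
    LoadPortraits(students);
    std::printf("%s\n", students[0].portrait.c_str());
}
```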
Is it too late to reduce loading times? Hell NO.
  7. Jenkins
Or any other continuous integration tool. «Rebuild the project» can take a long time too, and what can we do about that? Let me give you an idea. Buy a new PC. Get a 32-core Threadripper, 32 GB of the fastest RAM you can afford and a cool motherboard which supports all of that (of course, a Ryzen/i5/Celeron/i386/Raspberry Pi is fine too, but the faster, the better). The rest is not necessary; e.g. a barely functional second-hand video card burned out by bitcoin mining is fine. You set up this second PC in your room. You connect it to your network. You set up a ramdisk to speed things up even more. You properly set up Jenkins on this PC. From now on, Jenkins takes care of the rest: tracking your Git repository, the (re)building process, large and time-consuming unit tests, invoking the static code analyzer, profiling, generating reports and whatever else you can and want to hook up. More importantly, you can fix another bug while Jenkins is rebuilding the project for the previous one, et cetera.
In general, continuous integration is a great technology for quickly tracking down errors that were introduced in previous versions, helping you avoid those kinds of bug-hunting sessions. I am not sure continuous integration is needed for projects of 10,000-20,000 source lines, but things become different as soon as we step into 100k+ territory, and Yandere Simulator by now has approximately 150k+ source lines of code. I think continuous integration might well be worth it for Yandere Simulator.
Is it too late to add continuous integration? NO, albeit it is going to take some time and skills to set up.
  8. Stop caring about the criticism
Stop comparing Alex to Scott Cawthon. IMO Alex is very similar to the person known as SgtMarkIV, the developer of Brutal Doom, also a notorious edgelord who, for example, once told somebody to kill himself, just like… However, horrible person or not, SgtMarkIV does his job. He simply does not care much about public opinion. That's the difference.
  9. Go outside
Enough said. Your brain works slower if you only think about games and if you can't provide it with enough oxygen supply. I know that this one is probably the hardest to implement, but…
That's all, folks.
Bonus: Can you imagine how short this list would have been if someone had simply listened to Mike Zaimont instead of breaking down in tears?
submitted by Dezhitse to Osana

Our first generation hardware wallets were made of military-grade aerospace aluminum. We’ve stripped all that down to just focus on air-gapping your private keys.

https://preview.redd.it/0rogeunfujv41.png?width=1024&format=png&auto=webp&s=8a2cf5eff6f30a36fd7e86e16331eb40b4072627
Hey bitcoin! I'm Lixin, longtime bitcoiner and creator of Cobo Vault.
I come from a background in the electronic hardware industry, and experienced one of my products being featured in Apple Stores around the world. Back in 2018 Cobo CEO Discus Fish, who also co-founded F2Pool, invited me to help build Cobo’s hardware product line. As we had strong ties to miners in China, we naturally designed the 1st gen with them in mind. In China, mining farms are nearly always built in very isolated places where there is very cheap wind or water electricity. When we built our 1st generation Cobo Vault hardware wallet, we needed to maximize the durability of the device in addition to its security. We used aerospace aluminum rather than plastic and made it completely IP68 waterproof, IK9 drop resistant, and military standard MIL-STD-810G durable for the mining industry.
Things changed last year when I went to Bitcoin 2019 and talked to lots of hodlers in the States. I found that 95% of them don’t care about durability. I asked them if they were afraid of their home being flooded or burned down in a fire. The answer is - yes, they are afraid of these things, but see them as very low possibilities. Even if something were to happen, they said they would just buy another HW wallet for 100 dollars. From these conversations, it became more and more clear we should design a product around a normal hodler’s needs.
Our 2nd gen product compromises on durability but doesn’t compromise on security.
Most hodlers share some needs with miners:
  1. Hodlers want a more air-gapped solution so we kept QR code data transmission between your hardware wallet and the companion app which is also auditable.
  2. A Secure Element is the strongest wall of protection from physical attacks. We are the first hardware wallet - also maybe the first electronic product with SE - to have open source SE firmware.
  3. A battery can be a significant weak point. The 2nd gen continues the legacy of detachable batteries to prevent corrosion damage and will also support AAA batteries in case your battery dies someday.
  4. The 2nd gen also keeps the 4-inch touchscreen so you don’t need to suffer from tiny buttons and little screens anymore. Human error is one of the biggest reasons people lose their assets.
  5. We kept other features like the self-destruct mechanism and Web Authentication, which prevent side-channel and supply chain attacks.
If you'd like to read more about these features, check out our blog posts.
Aside from the legacy of the 1st gen, our 2nd gen product will have:
  1. Open source hardware wallet application layer and Secure Element firmware code. With the open source firmware code, you can see: random number generation, master private key generation, key derivation, and the signing process all happen within the SE and your private keys never leave.
  2. At the Bitcoin 2019 conference half the hodlers I met told me they own multiple hardware wallets which they use on the go. We added a fingerprint sensor you can use to authorize transactions without typing in your password. No need to worry about surveillance cameras when using your hardware wallet in airports.
  3. We will also support PSBT (BIP174) to be compatible with third-party wallets like Electrum or Wasabi Wallet, in case people need to use Cobo Vault with their own node or for CoinJoin. Multisig between Cobo Vault and other wallets will also be supported, to prevent a single point of failure with any one brand of hardware wallet.
  4. By sacrificing the durability, we successfully controlled the price under 100 USD for the basic version.
  5. BTC-only firmware version for people who want to minimize the codebase for less of an attack surface.
We truly appreciate the support from the community and are giving away free metal storage Cobo Tablets with every purchase of our 2nd gen for a week! Add a tablet to your cart and place your order before May 5th, 8 AM PST to claim your free metal storage. Find us on Twitter CryptoLixin and CoboVault - any suggestions or questions are welcome!
submitted by Bright_Charge to Bitcoin

Reddcoin (RDD) Core Wallet Release - v3.10.0rc4 Core Staking (PoSV v2) Wallet including MacOS Catalina and more!

https://github.com/reddcoin-project/reddcoin/releases/tag/v3.10.0rc4
Reddcoin (RDD) Core Dev team releases v3.10.0rc4 Core Wallet.
Includes full MacOS Catalina support, Bitcoin 0.10 codebase features, security and other enhancements. Full changelog available on github, complete release notes to be published with full 3.10.0 release anticipated shortly.
NOTE: This v3.10.0rc4 code is pre-release, but may be used on mainnet for normal operations. This final "release candidate" version addresses an identified issue where the individual PoSV v2 stake transaction could be modified such that no funds went to the developer - see Issue #155 for a description. It also includes additional components of the enhanced build system: Travis continuous integration (CI) and Transifex translations. The pre-release v3.10.0rc4 binaries are not certificate signed.
To assist in translations, correct text, or add languages, please join the following team: https://www.transifex.com/reddcoin/reddcoin/qt-translation-v310/ To assist in other aspects of the Reddcoin project, please contact TechAdept or any member of the team.
Bootstrap (zipped folder of blockchain, date of upload 5-1-20) may be downloaded here if required: https://drive.google.com/file/d/1ItVFGiDyIH5SfCNhfrj29Qavg8LWmfZy/view?usp=sharing
Commits included since rc3:
2a8c7e6 Preparations for 3.10.0 rc4
4a6f398 Update translations
7aa5151 build: update reference time to something more recent
1a65b8c Update translations
d4a1ca6 transifex: update translation instructions
a03895b transifex: update config for this release
51ad1e0 move check before supermajority reached
794680f Make check for developer address when receiving block
457503e travis: Remove group: legacy
97d3a2a travis: Remove depreciated sudo flag
21dcfa6 docs: update release notes
7631aac update error messages
5b41e31 check that the outputs of the stake are correct.
9bd1820 travis: test with wallet enabled
55f2dd5 fix reference to Reddcoin
220f404 travis: disable libs for windows builds (temp)
b044e0f depends: qt update download source path
2fe2d85 depends: set new download source
4cf531e remove duplicated entry
0d8d0da travis: diable tests
e13ad81 travis: manually disable sse2 support for ARM processors
1f62045 travis: fix crash due to missing (and not required) package
0fb3b75 travis: update path
9d6a642 docs: update travis build status badge with correct path
https://github.com/reddcoin-project/reddcoin/releases/tag/v3.10.0rc4
submitted by TechAdept to reddCoin

Transcript of Bitcoin ABC’s Amaury Sechet presenting at the Bitcoin Cash City conference on September 5th, 2019

Transcript of Bitcoin ABC’s Amaury Sechet presenting at the Bitcoin Cash City conference on September 5th, 2019
I tried my best to be as accurate as possible, but if there are any errors, please let me know so I can fix them. I believe this talk is important for all Bitcoin Cash supporters, and I wanted to provide it in written form so people can read it as well as watch the video: https://www.youtube.com/watch?v=uOv0nmOe1_o For me, this was the first time I felt like I understood the issues Amaury's been trying to communicate, and I hope that reading this presentation might help others understand as well.
Bitcoin Cash’s Culture
“Okay. Hello? Can you hear me? The microphone is good, yeah?
Ok, so after that introduction, I’m going to do the only thing that I can do now, which is disappoint you, because well, that was quite something.
So usually I make technical talks and this time it’s going to be a bit different. I’m going to talk about culture in the Bitcoin Cash ecosystem. So first let’s talk about culture, like what is it? It’s ‘the social behaviors and norms found in human society.’
So we as the Bitcoin Cash community, we are a human society, or at least we look like it. You’re all humans as far as I know, and we have social behaviors and norms, and those social behaviors and norms have a huge impact on the project.
And the reason why I want to focus on that point very specifically is because we have better fundamentals and we have a better product and we are more useful than most other cryptos out there. And I think that’s a true statement, and I think this is a testimony of the success of BCH. But also, we are only just 3% of BTC’s value. So clearly there is something that we are not doing right, and clearly it’s not fundamental, it’s not product, it’s not usefulness. It’s something else, and I think this can be found somewhat in our culture.
So I have this quote here, from Naval Ravikant. I don’t know if you guys know him but he’s a fairly well known speaker and thinker, and he said, “Never trust anyone who does not annoy you from time to time, because it means that they are only telling you what you want to hear.”
And so today I am going to annoy you a bit, in addition to disappointing you, so yeah, it’s going to be very bad, but I feel like we kind of need to do it.
So there are two points, mainly, that I think our culture is not doing the right thing. And those are gonna be infrastructure and game theory. And so I’m going to talk a little bit about infrastructure and game theory.
Right, so, I think there are a few misconceptions by people that are not used to working in software infrastructure in general, but basically, it works like any other kind of infrastructure. So basically all kinds of infrastructure decay, and we are under the assumption that technology always gets better and better and better and never decays. But in terms of that, it actually decays all the time, and we have just a bunch of engineers working at many many companies that keep working at making it better and fighting that decay.
I’m going to take a few examples, alright. Right now if you want to buy a cathode ray tube television or monitor for your computer (I’m not sure why you want to do that because we have better stuff now), but if you want to buy that, it’s actually very difficult now. There are very little manufacturers that even know how to build them. We almost forgot as a human society how to build those stuff. Because, well, there was not as high of a demand for them as there was before, and therefore nobody really worked on maintaining the knowledge or the know how, and the factories, none of that which are required to build those stuff, and therefore we don’t build them. And this is the same for vinyl discs, right? You can buy vinyl disk today if you want, but it’s actually more expensive than it used to be twenty years ago.
We used to have space shuttles. Both Russia and US used to have space shuttles. And now only the US have space shuttles, and now nobody has space shuttles anymore.
And there is an even better counter example to that. It’s that the US, right now, is refining Uranium for nuclear weapons. Like on a regular basis there are people working on that problem. Except that the US doesn’t need any new uranium to make nuclear weapons because they are decommissioning the weapons that are too old and can reuse that uranium to build the new weapon that they are building. The demand for that is actually zero, and still there are people making it and they are just basically making it and storing it forever, and it’s never used. So why is the US spending money on that? Well you would say governments are usually pretty good at spending money on stuff that are not very useful, but in that case there is a very good reason. And the good reason is that they don’t want to forget how it’s done. Because maybe one day it’s going to be useful. And acquiring the whole knowledge of working with uranium and making enriched uranium, refining uranium, it’s not obvious. It’s a very complicated process. It involves very advanced engineering and physics, a lot of that, and keeping people working on that problem ensures that knowledge is kept through time. If you don’t do that, those people are going to retire and nobody will know how to do it. Right.
So in addition to decaying infrastructure from time to time, we can have zero days in software, meaning problems in the software that are not now exploited live on the network. We can have denial of service attack, we can have various failures on the network, or whatever else, so just like any other infrastructure we need people that essentially take care of the problem and fight the decay constantly doing maintenance and also be ready to intervene whenever there is some issue. And that means that even if there is no new work to be done, you want to have a large enough group of people that are working on that everyday just making it all nice and shiny so that when something bad happens, you have people that understand how the system works. So even if for nothing else, you want a large enough set of people working on infrastructure for that to be possible.
So we’re not quite there yet, and we’re very reliant on BTC. Because the software that we’re relying on to run the network is actually a fork of the BTC codebase. And this is not specific to Bitcoin Cash. This is also true for Litecoin, and Dash, and Zcash and whatever. There are many many cryptos that are just a fork of the Bitcoin codebase. And all those cryptos, they actually are reliant on BTC to do some maintenance work because they have smaller teams working on the infrastructure. And as a result any rational market cannot price those other currencies higher than BTC. It would just not make sense anymore. If BTC were to disappear, or were to fail on the market, and this problem is not addressed, then all those other currencies are going to fail with it. Right? And you know that may not be what we want, but that’s kind of like where we are right now.
So if we want to go to the next level, maybe become number one in that market, we need to fix that problem because it’s not going to happen without it.
So I was mentioning the 3% number before, and it’s always very difficult to know what all the parameters are that goes into that number, but one of them is that. Just that alone, I’m sure that we are going to have a lower value than BTC always as long as we don’t fix that problem.
Okay, how do we fix that problem? What are the elements we have that prevent us from fixing that problem? Well, first we need people with very specific skill sets. And the people that have experience in those skill sets, there are not that many of them because there are not that many places where you can work on systems involving hundreds of millions, if not billions of users, that do like millions of transactions per second, that have systems that have hundreds of gigabytes per second of throughput, this kind of stuff. There are just not that many companies in the world that operate on that scale. And as a result, the number of people that have the experience of working on that scale is also pretty much limited to the people coming out of those companies. So we need to make sure that we are able to attract those people.
And we have another problem that I talked about with Justin Bons a bit yesterday, that we don’t want to leave all that to be fixed by a third party.
It may seem nice, you know, so okay, I have a big company making good money, I’m gonna pay people working on the infrastructure for everybody. I’m gonna hire some old-time cypherpunk that became famous because he made a t-shirt about ERISA and i’m going to use that to promote my company and hire a bunch of developers and take care of the infrastructure for everybody. It’s all good people, we are very competent. And indeed they are very competent, but they don’t have your best interest in mind, they have their best interest in mind. And so they should, right? It’s not evil to have your own interest in mind, but you’ve got to remember that if you delegate that to others, they have their best interest in mind, they don’t have yours. So it’s very important that you have different actors that have different interests that get involved into that game of maintaining the infrastructure. So they can keep each other in check.
And if you don’t quite understand the value proposition for you as a business who builds on top of BCH, the best way to explain that to whoever is doing the financials of your company is as an insurance policy. The point of the insurance on the building where your company is, or on the servers, is so that if everything burns down, you can get money to get your business started and don’t go under. Well this is the same thing. Your business relies on some infrastructure, and if this infrastructure ends up going down, disappearing, or being taken in a direction that doesn’t fit your business, your business is toast. And so you want to have an insurance policy there that insures that the pieces that you’re relying on are going to be there for you when you need them.
Alright let’s take an example. In this example, I purposefully did not put any name because I don’t want to blame people. I want to use this as an example of a mistake that were made. I want you to understand that many other people have done many similar mistakes in that space, and so if all you take from what I’m saying here is like those people are bad and you should blame them, this is like completely the wrong stuff. But I also think it’s useful to have a real life example.
So on September 1st, at the beginning of the week, we had a wave of spam that was broadcasted on the network. Someone made like a bunch of transactions, and those were very visibly transactions that were not there to actually do transactions, they were there just to create a bunch of load on the network and try to disturb its good behavior.
And it turned out that most miners were producing blocks from 2 to 8 megabytes, while typical market demand is below half a megabyte, typically, and everything else above that was just spam, essentially. And if you ask any people that have experience in capacity planning, they are going to tell you that those limits are appropriate. The reason why, and the alternative to raising those limits that you can use to mitigate those side effects are a bit complicated and they would require a talk in and of itself to go into, so I’m going to just use an argument from authority here, but trust me, I know what I’m talking about here, and this is just like raising those limits is just not the solution. But some pool decided to increase that soft cap to 32 megs. And this has two main consequences that I want to dig in to explain what is not the right solution.
And the first one is that we have businesses that are building on BCH today. And those businesses are the ones that are providing value, they are the ones making our network valuable. Right? So we need to treat those people as first class citizens. We need to attract and value them as much as we can. And those people, they find themselves in the position where they can either dedicate their resources and their attention and their time to make their service better and more valuable for users, or maybe expand their service to more countries, to more markets, to whatever, they can do a lot of stuff, or they can spend their time and resources to make sure the system works not when you have like 10x the usual load, but also 100x the usual load. And this is something that is not providing value to them, this is something that is not providing value to us, and I would even argue that this is something that is providing negative value.
Because if those people don’t improve their service, or build new services, or expand their service to new markets, what’s going to happen is that we’re not going to do 100x. 100x happens because people provide useful services and people start using it. And if we distract those people so that they need to do random stuff that has nothing to do with their business, then we’re never going to do 100x. And so having a soft cap that is way way way above what is the usual market demand (32 megs is almost a hundred times what is the market demand for it), it’s actually a denial of service attack that you open for anyone that is building on the chain.
We were talking before, like yesterday we were asking about how do we attract developers, and one of the important stuff is that we need to value that over valuing something else. And when we take this kind of move, the signal that we send to the community, to the people working on that, is that people yelling very loudly on social media, their opinion is more valued than your work to make a useful service building on BCH. This is an extremely bad signal to send. So we don’t want to send those kind of signals anymore.
That’s the first order effect, but there’s a second order effect, and the second order effect is to scale we need people with experience in capacity planning. And as it turns out big companies like Google, and Facebook, and Amazon pay good money, they pay several 100k a year to people to do that work of capacity planning. And they wouldn’t be doing that if they just had to listen to people yelling on social media to find the answer. Right? It’s much cheaper to do the simple option, except the simple option is not very good because this is a very complex engineering problem. And not everybody is like a very competent engineer in that domain specifically. So put yourself in the shoes of some engineers who have skills in that particular area. They see that happening, and what do they see? The first thing that they see is that if they join that space, they’re going to have some level of competence, some level of skill, and it’s going to be ignored by the leaders in that space, and ignoring their skills is not the best way to value it as it turns out. And so because of that, they are less likely to join it. But there is a certain thing that they’re going to see. And that is that because they are ignored, some shit is going to happen, some stuff are going to break, some attacks are going to be made, and who is going to be called to deal with that? Well, it’s them. Right? So not only are they going to be not valued for their stuff, the fact that they are not valued for their stuff is going to put them in a situation where they have to put out a bunch of fires that they would have known to avoid in the first place. So that’s an extremely bad value proposition for them to go work for us. And if we’re going to be a world scale currency, then we need to attract those kinds of people. And so we need to have a better value proposition and a better signaling that we send to them.
Alright, so that’s the end of the first infrastructure stuff. Now I want to talk about game theory a bit, and specifically, Schelling points.
So what is a Schelling point? A Schelling point is something that we can agree on without especially talking together. And there are a bunch of Schelling points that exist already in the Bitcoin space. For instance we all follow the longest chain that have certain rules, right? And we don’t need to talk to each other. If I’m getting my wallet and I have some amount of money and I go to any one of you here and you check your wallet and you have that amount of money and those two amounts agree. We never talk to each other to come to any kind of agreement about how much each of us have in terms of money. We just know. Why? Because we have a Schelling point. We have a way to decide that without really communicating. So that’s the longest chain, but also all the consensus rules we have are Schelling points. So for instance, we accept blocks up to a certain size, and we reject blocks that are bigger than that. We don’t constantly talk to each other like, ‘Oh by the way do you accept 2 mb blocks?’ ‘Yeah I do.’ ‘Do you accept like 3 mb blocks? And tomorrow will you do that?’
We’re not doing this as different actors in the space, constantly worrying each other. We just know there is a block size that is a consensus rule that is agreed upon by almost everybody, and that’s a consensus rule. And all the other consensus rules are effectively changing Schelling points. And our role as a community is to create valuable Schelling points. Right? You want to have a set of rules that provide as much value as possible for different actors in the ecosystem. Because this is how we win. And there are two parts to that. Even though sometimes we look and it’s just one thing, but there are actually two things.
The first one is that we need to decide what is a valuable Schelling point. And I think we are pretty good at this. And this is why we have a lot of utility and we have a very strong fundamental development. We are very good at choosing what is a good Schelling point. We are very bad at actually creating it and making it strong.
So I’m going to talk about that.
How do you create a new Schelling point. For instance, there was a block size, and we wanted a new block size. So we need to create a new Schelling point. How do you create a new Schelling point that is very strong? You need a commitment strategy. That’s what it boils down to. And the typical example that is used when discussing Schelling points is nuclear warfare. So think about that a bit. You have two countries that both have nuclear weapons. And one country sends a nuke on the other country. Destroys some city, whatever, it’s bad. When you look at it from a purely rational perspective, you will assume that people are very angry, and that they want to retaliate, right? But if you put that aside, there is actually no benefit to retaliating. It’s not going to rebuild the city, it’s not going to make them money, it’s not going to give them resources to rebuild it, it’s not going to make new friends. Usually not. It’s just going to destroy some stuff in the other guy that would otherwise not change anything because the other guys already did the damage to us. So if you want nuclear warfare to actually prevent war like we’ve seen mostly happening in the past few decades with the mutually assured destruction theory, you need each of those countries to have a very credible commitment strategy, which is if you nuke me, I will nuke you, and I’m committing to that decision no matter what. I don’t care if it’s good or bad for me, if you nuke me, I will nuke you. And if you can commit to that strongly enough so that it’s credible for other people, it’s most likely that they are not going to nuke you in the first place because they don’t want to be nuked. And it’s capital to understand that this commitment strategy, it’s actually the most important part of it. It’s not the nuke, it’s not any of it, it’s the commitment strategy. You have the right commitment strategy, you can have all the nuke that you want, it’s completely useless, because you are not deterring anyone from attacking you.
There are many other examples, like private property. It’s something usually you’re going to be willing to put a little bit of effort to defend, and the effort is usually way higher than the value of the property itself. Because this is your house, this is your car, this is your whatever, and you’re pretty committed to it, and therefore you create a Schelling point over the fact that this is your house, this is your car, this is your whatever. People are willing to use violence and whatever to defend their property. This is effectively, even if you don’t do it yourself, this is what happens when you call the cops, right? The cops are like you stop violating that property or we’re going to use violence against you. So people are willing to use a very disproportionate response even in comparison to the value of the property. And this is what is creating the Schelling point that allows private property to exist.
This is the commitment strategy. And so the longest chain is a very simple example. You have miners and what miners do when they create a new block, essentially they move from one Schelling point when a bunch of people have some amount of money, to a new Schelling point where some money has moved, and we need to agree to the new Schelling point. And what they do is that they commit a certain amount of resources to it via proof of work. And this is how they get us to pay attention to the new Schelling point. And so UASF is also a very good example of that where people were like we activate segwit no matter what, like, if it doesn’t pan out, we just like busted our whole chain and we are dead.
Right? This is like the ultimate commitment strategy, as far as computer stuff is involved. It’s not like they actually died or anything, but as far as you can go in the computer space, this is very strong commitment strategy.
So let me take an example that is fairly inconsequential in its consequences, but I think explains very well. The initial BCH ticker was BCC. I don’t know if people remember that. Personally I remember reading about it. It was probably when we created it with Jonald and a few other people. And so I personally was for XBC, but I went with BCC, and most people wanted BCC right? It doesn’t matter. But it turned out that Bitfinex had some Ponzi scheme already listed as BCC. It was Bitconnect, if you remember. Carlos Matos, you know, great guy, but Bitconnect was not exactly the best stuff ever, it was a Ponzi scheme. And so as a result Bitfinex decided to list Bitcoin Cash as BCH instead of BCC, and then the ball started rolling and now everybody uses BCH instead of BCC.
So it’s not all that bad. The consequences are not that very bad. And I know that many of you are thinking that right now. Why is this guy bugging us about this? We don’t care if it’s BCC or BCH. And if you’re doing that, you are exactly proving my point.
Because … there are people working for Bitcoin.com here right? Yeah, so Bitcoin.com is launching an exchange, or just has launched, it’s either out right now or it’s going to be out very soon. Well think about that. Make this thought experiment for yourself. Imagine that Bitcoin.com lists some Ponzi scheme as BTC, and then they decide to list Bitcoin as BTN. What do you think would be the reaction of the Bitcoin Core supporter? Would they be like, you know what? we don’t want to be confused with some Ponzi scheme so we’re going to change everything for BTN. No, they would torch down Roger Ver even more than they do now, they would torch down Bitcoin.com. They would insult anyone that would suggest that this was a good idea to go there. They would say that everyone that uses the stuff that is BTC that it’s a ponzi scheme, and that it’s garbage, and that if you even talk about it you are the scum of the earth. Right? They would be extremely committed to whatever they have.
And I think this is a lesson that we need to learn from them. Because even though it’s a ticker, it’s not that important, it’s that attitude that you need to be committed to that stuff if you want to create a strong Schelling point, that allows them to have a strong Schelling point, and that does not allow us to have that strong of a Schelling point.
Okay, so yesterday we had the talk by Justin Bons from Cyber Capital, and one of the first things he said in his talk, is that his company has a very strong position in BCH. And so that changed the whole tone of the talk. You gotta take him seriously because his money is where his mouth is. You know that he is not coming on the stage and telling you random stuff that comes from his mind or tries to get you to do something that he doesn’t try himself. That doesn’t mean he’s right. Maybe he’s wrong, but if he’s wrong, he’s going bankrupt. And you know just for that reason, maybe it’s worth it to listen to it a bit more than some random person saying random stuff when they have no skin in the game.
And it makes him more of a leader in the space. Okay we have some perception in this space that we have a bunch of leaders, but many of them don’t have skin in the game. And it is very important that they do. So when there is some perceived weakness from BCH, if you act as an investor, you are going to diversify. If you act as a leader, you are going to fix that weakness. Right? And so, leaders, it’s not like you can come here and decide well, I’m a leader now. Leaders are leaders because people follow them. It seems fairly obvious, but … and you are the people following the leaders, and I am as well. We decide to follow the opinion of some people more than the opinion of others. And those are the defacto leaders of our community. And we need to make sure that those leaders that we have like Justin Bons, and make sure that they have a strong commitment to whatever they are leading you to, because otherwise you end up in this situation:

https://preview.redd.it/r23dptfobcl31.jpg?width=500&format=pjpg&auto=webp&s=750fbd0f1dc0122d2791accc59f45a235a522444
Where you got a leader, he’s getting you to go somewhere, he has some goal, he has some whatever. In this case he is not that happy with the British people. But he’s like give me freedom or give me death, and he’s going to fight the British, but at the same time he’s like you know what? Maybe this shit isn’t gonna pan out, you gotta make sure you have your backup plan together, you have your stash of British pound here. You know, many of us are going to die, but that’s a sacrifice I’m willing to make.
That’s not the leader that you want.
I’m going to go to two more examples and then we’re going to be done with it. So one of them is Segwit 2x. Segwit 2x came with a time where some people wanted to do UASF. And UASF was essentially people that set up a modified version of their Bitcoin node that would activate segwit on August 1, no matter what. Right? No matter what miners do, no matter what other people do, it’s going to activate segwit. And either I’m going to be on the other fork, or I’m going to be alone and bust. Well, the alternative proposal was segwit 2x. Where people would activate segwit and then increase the size of the block. And what happened was that one of the sides had a very strong commitment strategy, and the other side, instead of choosing a proportional commitment strategy, what they did was that they modified the activation of segwit 2x to be compatible with UASF. And in doing so they both validate the commitment strategy done by the opposite side, and they weaken their own commitment strategy. So if you look at that, and you understand game theory a bit, you know what’s going to happen. Like the fight hasn’t even started and UASF has already won. And when I saw that happening, it was a very important development to me, because I have some experience in game theory, a lot of that, so I understood what was happening, and this is what led me to commit to BCH, which was BCC at the time, 100%. Because I knew segwit 2x was toast, even though it had not even started, because even though they had very strong cards, they are not playing their cards right, and if you don’t play your cards right, it doesn’t matter how strong your cards are.
Okay, the second one is emergent consensus. And the reason I wanted to put those two examples here is because I think those are the two main examples that lead to the fact that BTC have small blocks and we have big blocks and we’re a minority chain. Those are like the two biggest opportunities we had to have big blocks on BTC and we blew both of them for the exact same reason.
So emergent consensus is an interesting technology that allows you to try a bigger block without splitting the network. Essentially, if someone starts producing blocks that are bigger than … (video skips) … the network seems to be following the chain that has larger blocks, eventually they’re going to fall back on that chain, and that’s a very clever mechanism that allows you to make the consensus rules softer in a way, right? When everybody has the same consensus rules, it still remains enforced, but if a majority of people want to move to a new point, they can do so by bringing others with them without creating a fork. That is a very good activation mechanism for changing the block size, for instance, or it can be used to activate other stuff.
There is a problem, though. This mechanism isn’t able to set a new point. It’s a way to activate a new Schelling point when you have one, but it provides no way to decide when, or to what value, we are going. So this whole strategy lacks the commitment aspect. And because it lacks the commitment aspect, it was unable to activate properly. It was good, but it was not sufficient in itself. It needs to be combined with a commitment strategy. And on that one specifically, some researchers wrote a whole paper (https://eprint.iacr.org/2017/686.pdf) unpacking the game theory, and they essentially come to the conclusion that it’s not going to set a new size limit because it lacks the commitment aspect. They go on to model all the mathematics of it; they give you all the numbers, the probabilities, and the different scenarios that are possible. It’s a very interesting paper. I’m kind of explaining the game theory from a hundred-mile perspective, but you can actually deep dive into it, and if you want to know the details, they are in there. People are doing that. This is an actual branch of mathematics.
Alright, okay, so conclusion. We must avoid weakening our commitment strategy. And that means that we need to work in a way where first there is decentralization happening. Everybody has ideas, and we fight over them, we decide where we want to go, we put them on the roadmap, and once it’s on the roadmap, we need to commit to it. Because when people go, ‘Oh, this is decentralized’ and we do random stuff after that, we actually end up with decentralization, not decentralization in a cooperative manner, but in an atomization manner. You get all the atoms everywhere, we explode, we destroy ourselves.
And we must require a leader to have skin in the game, so that we make sure we have good leaders. I have a little schema to explain that. We need to have negotiations between different parties, and because there are no bosses, the negotiation can last for a long time and be tumultuous and everything, and that’s fine, that’s what decentralization looks like at that stage, and that’s great and that makes the system strong. But then once we have made a decision, we have to commit to it to create a new Schelling point. Because if we don’t, the new Schelling point is very weak, and we get decentralization in the form of disintegration. And I think we have not been very good at balancing the two. Essentially what I would like for us to do going forward is to encourage as much as possible decentralization in the first form, but to consider people who participate in the second form as hostile to BCH, because their behavior is damaging to whatever we are doing. And they are often gonna tell you that we can’t do anything about it because it’s permissionless and decentralized, and they are right, this is permissionless and decentralized, and they can do that. We don’t have to take it seriously. We can show them the door. And not a single person can do that by themself, but as a group, we can develop a culture where it’s the norm to do that. And we have to do that.”
submitted by BCHcain to btc [link] [comments]

We’ve been working on a new product release for a year and want to hear your opinions on the product. Read on for product information and our vision for hardware wallets.

TL;DR Key features of Cobo Vault 2nd gen we are going to launch:
Hey bitcoin! I'm Lixin, a longtime Bitcoiner and creator of Cobo Vault.
I come from a background in the electronic hardware industry, and experienced one of my products being featured in Apple Stores around the world. Although my interest goes back to 2010, my career intersected Bitcoin when Discus Fish (CEO of Cobo) invited me to help build Cobo’s hardware product line. Discus Fish is also the co-founder and CEO of f2pool, one of the largest mining pools currently in the world, and one of the earliest advocates of bitcoin in China.
Back in 2018 we built our 1st generation Cobo Vault hardware wallet. As we had strong ties to miners in China, we naturally designed the 1st gen with them in mind. For those who are not familiar with the mining industry in China, mining farms are nearly always built in very isolated places where there is very cheap wind or water electricity. As the miners would take their storage into these isolated regions, we needed to maximize the durability of the device in addition to its security. We used aerospace aluminum rather than plastic and made it completely IP68 waterproof. We also gave it a hardshell metal case you can put it in, which is IK9 drop resistant and passes the American military durability test MIL-STD-810G.
As for the electronic components inside the device, in order to maximize security, we made it as air-gapped as possible with QR codes. We see this as an important choice because USB cables and Bluetooth are not transparent and have a bigger attack surface. With QR codes you can see exactly what is going on and do not have to connect to a laptop which could have malware on it. QR code interaction needs a camera and a more complicated system which needs to be supported by high-level chips.
All these come with a cost, and the 1st generation isn’t as accessible for average hodlers. For more details on the product, visit here.
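To make the air-gapped QR flow described above concrete, here is a minimal sketch of the round trip an unsigned transaction might make over QR codes. It is purely illustrative: the payload format, function names and signing step are invented for this example and are not Cobo Vault's actual protocol.

    import hashlib, json

    def to_qr_payload(obj):
        # On a real device this string would be rendered as a QR code on screen;
        # here we just serialize it so the round trip is visible end to end.
        return json.dumps(obj, sort_keys=True)

    def from_qr_payload(payload):
        return json.loads(payload)

    def sign_offline(unsigned, private_key):
        # Stand-in for the vault's signing step: the holder can inspect exactly
        # what is being signed before approving it.
        digest = hashlib.sha256(private_key + to_qr_payload(unsigned).encode()).hexdigest()
        return {**unsigned, "signature": "sig(" + digest[:16] + ")"}

    # Online (watch-only) machine builds the transaction and shows it as a QR code...
    unsigned_tx = {"to": "bc1q-example-address", "amount_sats": 150000, "fee_sats": 400}
    qr_out = to_qr_payload(unsigned_tx)

    # ...the offline vault scans it, signs it, and displays a QR of the result...
    signed_tx = sign_offline(from_qr_payload(qr_out), private_key=b"\x01" * 32)

    # ...which the online machine scans back and broadcasts.
    print(to_qr_payload(signed_tx))

The point of the design is that only small, human-inspectable payloads ever cross the gap, instead of an opaque USB or Bluetooth channel.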
Things changed last year when I went to Bitcoin 2019 and talked to lots of hodlers in the States. I found that 95% of them don’t care about durability. I asked them if they were afraid of their home being flooded or burned down in a fire. The answer was yes, they are afraid of these things, but they see them as very unlikely. Even if something were to happen, they said they would just buy another hardware wallet for 100 dollars. From these conversations, it became more and more clear that the needs of miners and hodlers are totally different.
After coming back from that conference, our team began the almost one year journey of designing our 2nd gen product. It compromises on durability but doesn’t compromise on security.
We designed the 2nd gen product all around a normal hodler’s needs.
Obviously hodlers share some common needs with miners:
If you'd like to read more about these features, check out our blog posts here.
Aside from these features carried over from the 1st gen, our 2nd gen product will have some other big improvements:
Personally, I am a bitcoin maximalist and also a big fan of the KISS principle. We will also release a BTC-only firmware version for people who want to minimize the codebase for less of an attack surface.
Thank you for reading this far. More details, like the final price, will be released later when we officially launch the product in late April. Any suggestions or questions are welcome. Also you can find me @CryptoLixin or @CoboVault on Twitter! Ears are wide open!
submitted by Bright_Charge to Bitcoin [link] [comments]

WARNING: Bitcoin Cash May Introduce Fatal Errors

Hi All,
I am a long-term Bitcoin enthusiast and a core developer of PascalCoin, an infinitely scalable and completely original cryptocurrency (https://www.pascalcoin.org). I am also the developer of BlockchainSQL.io, an SQL backend for Bitcoin.
I have been involved in Bitcoin community for a long time, and was a big supporter of hard-forking on Aug 1 2017 (https://redd.it/6i5qt1).
Due to the recent alarming proposals and the manner in which they are being pushed, I feel I have a moral duty to speak out and warn against what could be fatal technical errors for BCH.
As a full-time core developer at PascalCoin for the last 18 months, I have dealt with DoS attacks, 51% attacks, timewarp attacks, mining centralisation attacks, out-of-consensus bugs, high orphan rates and various other issues. Suffice to say, Layer-1 cryptocurrency development is hard and you don't really appreciate how fragile everything is until you work on a cryptocurrency codebase and manage a live mainnet (disclaimer: Albert Molina is the main genius here, but it is a team effort).
Infinite Block Size: I know there has been much discussion here about the safety of "big blocks", and I generally agree with those arguments. However, the analysis I've seen always assumes the attackers are economically rational actors. On that basis, yes, the laws of economics will incentivise miners to naturally regulate the size of minted blocks. However, this does not account for "economically irrational actors" such as competing coins, governments, banks, etc.
Allowing the natural limit of 32 MB was, I think, a sensible move, but adding changes to the network protocol to allow 128 MB blocks and then more does not seem appropriate right now since:
It makes much more sense to leave the block size at 32 MB until blocks reach ~16 MB, at which point the technical, security and reliability issues can be better understood and a more informed decision can be made by the BCH community.
Re-Enabling Opcodes: It's important to remember that these opcodes were disabled by Satoshi Nakamoto himself early on in the project due to ongoing bugs and instability arising out of the scripting engine (https://en.bitcoin.it/wiki/Common_Vulnerabilities_and_Exposures).
Later, as the scripts became standardized, this issue was forgotten/abandoned since reactivating them would require a hard fork and Core developers were against HFs. Personally, I think it's a good idea to re-enable them, but only after:
Infinite Script Size: One of the proposals I've seen that complements re-enabling opcodes is to enable unbounded script sizes. From local discussions I've had with people promoting this idea, the "belief" is that miners will auto-regulate these as well. However, this is unproven.
Unbounded script sizes introduce significant attack vectors in the areas of denial of service and stack/memory overflow (especially with all opcodes enabled). One attack I can foresee here is the introduction of a quadratic-hashing attack, but inside a single transaction!
You have to understand that Ethereum had this problem from the outset and this is why they introduced the concept of "GAS". CPU power is a limited resource and if you don't pay for it, it will be completely abused. From what I've seen, there is no equivalent to GAS in this proposal.
To understand the seriousness of this issue, think back to Ethereum's network instability before the DAO hack. It went through many periods of DoS attacks as hackers cleverly found oversights in their opcode/EVM engine. This is a serious, proven and real-world attack vector, not one to be "solved later". With unbounded script sizes that do not pay any gas, the BCH network could easily be brought to a grinding halt.
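To make the gas argument concrete, here is a minimal sketch of a toy stack interpreter with and without an execution budget. The opcodes and costs are invented for illustration; this is not BCH Script or the EVM, only the general shape of the problem: without a budget, a validator must burn whatever CPU the script demands, while a metered validator can reject it cheaply.

    import hashlib

    class OutOfGas(Exception):
        pass

    COSTS = {"PUSH": 1, "DUP": 1, "ADD": 2, "HASH": 50}  # illustrative costs only

    def run(script, gas_limit=None):
        stack, gas_used = [], 0
        for op, *args in script:
            if gas_limit is not None:
                gas_used += COSTS[op]
                if gas_used > gas_limit:
                    raise OutOfGas("budget exhausted after %d units" % gas_used)
            if op == "PUSH":
                stack.append(args[0])
            elif op == "DUP":
                stack.append(stack[-1])
            elif op == "ADD":
                stack.append(stack.pop() + stack.pop())
            elif op == "HASH":
                stack.append(hashlib.sha256(str(stack.pop()).encode()).digest())
        return stack

    # An "abusive" script: one push, then an arbitrarily long run of hashing.
    abusive = [("PUSH", 1)] + [("DUP",), ("HASH",)] * 100_000

    # With a gas budget the node rejects it almost immediately instead of
    # grinding through the whole thing for free.
    try:
        run(abusive, gas_limit=10_000)
    except OutOfGas as e:
        print("rejected:", e)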
Voting/Signaling/Testnet: Even at PascalCoin, we go through a process of voting to enable all changes (https://www.pascalcoin.org/voting). We are barely a 10-million-mcap coin and yet show more discipline with voting, well-defined PIP design guidelines and testnet releases. There is no excuse for BCH! It is a multi-billion dollar network and changes of this magnitude cannot be released so recklessly on such short time-frames.
I hope these comments are considered by stakeholders of BCH and the community at large. I am not a maximalist and I support BCH, but the last week has revealed a serious technical void in BCH! The Bitcoin Core devs may not know much about economics, but they do know some things about the security and reliability of cryptocurrency software.
PLEASE REMEMBER THERE ARE EXTREMELY TALENTED AND VICIOUS ATTACKERS OUT THERE and you need to be very careful with changes of this magnitude.
submitted by HermanSchoenfeld to btc [link] [comments]

So-called "Poison Blocks" (what Greg Maxwell called the "big block attack") are the way Bitcoin was designed to scale and the ONLY way it ever can

Sounds insane, right? Not if you realize Bitcoin works only because it is an economic system. Everything in Bitcoin that falls under the purview of cutthroat market competition works, and everything that doesn't, doesn't.
The error here is that this is seen as a reason not to lift the cap. "We cannot raise the cap or miners would be forced to do work!" This is stated un-ironically, with no awareness that some miners being left behind and some miners making it is exactly how Bitcoin always had to work.
This is a cry to leave node code optimization out of the purview of cutthroat market competition, because apparently some believe that "cutthroat" has something to do with the result -- the kind of socialist mindset that thinks cutthroat competition among seatbelt makers would lead to seatbelts that kill you. Anyone who understands economics knows nothing could be further from the truth.
The rallying cry of the Core-style socialist mentality is that "Node code is too important to be left to the market, we need good Samaritan devs to provide it for all miners so that no miner is left behind."
The ultimate result of shielding men from the effects of folly, is to fill the world with fools. -Herbert Spencer
Likewise, the ultimate result of shielding miners from their inability or unwillingness to suitably optimize their node software is to fill Bitcoin with unprofessional miners who can't take us to global adoption.
Without the incentive to upgrade networking and codebase, Bitcoin lacks the crucial vetting process it needs in order to distill miners into a long tail of professionals who have what it takes to ride this train all the way to a billion users, quickly and securely.
I challenge anyone to describe how they think Bitcoin can professionalize as long as there remains an effective subsidy for laggard miners in the areas of networking and node optimization (not meaning protocol optimization, but rather things like parallel validation). As painful as it may seem, the only way Bitcoin scales is over the bankrupt shells of many miners who didn't have what it takes. The cruft cannot come along for the ride.
This means orphan battles, even if just a little at a time. It means stress tests of rapidly increasing scale. While killing off too much hashpower too fast is in no one's interest (hashrate gets too low), moving at a speed that is fast yet manageable by most big-league pros is. And really, the changes that need to be made aren't even reputed by anyone to be incredibly hard problems once you accept, as Satoshi did, that "it ends in datacentres and big server farms."
The fact that people are still arguing against 128MB by referencing tests with laptop nodes suggests that's the real problem here. Core's full node religion still has sway, despite being manufactured from whole cloth. Also known as Blockstream Syndrome, a play on Stockholm Syndrome (where captives begin to sympathize with their captors).
Whatever the reasons given, critics of removing the cap invariably appeal to the infrastructure "not being ready" as if that were a bad thing. It's a good thing!
First of all, if we were to wait for all miners to be ready, we would be waiting far too long. The right approach, to be determined by the market, is to move ahead somewhere between when 51% are ready and, say, 90% are ready, which is exactly what we can expect to happen without a cap. The incentives are such that it is profitable to shear away some laggard miners but not too many (as culling too many at a time leaves BCH open to hashpower attack by BTC miners; over the longer term, though, it incentivizes pros to enter and take the place of the failed miners, making BCH even more secure).
Secondly, the idea of a monolithic "infrastructure" ignores the secret sauce that makes Bitcoin work: miners in competition. Some are expected to fail to be ready! If not, how can Bitcoin miners get any more professional? Only the removal or reformation of the laggards can ever ensure Bitcoin ends up with professional infrastructure.
This vetting process is inevitable and essential, and it must apply to all aspects of Bitcoin that we want to see professionalized, including node software.
Now leaving aside a miner filling his block with his own 0-fee transactions (which can be dealt with by other miners rejecting blocks with too many 0-fee txs of low coin age*), Greg Maxwell's "big block attack" -- where big miners try to terrorize smaller (less well capitalized) miners using oversized blocks that a sizable minority of the network can't handle due to their slow networking -- is in fact exactly how Bitcoin MUST scale.
It's not an attack, it's a stress test, and one Bitcoin literally cannot scale without. What he called an attack is the solution to scaling, not any kind of problem. Stress tests are incentivized in Bitcoin as a way of calling the bluff of the lazy miners. You gamble some money on an "attack," see who the slowpokes are and take their block rewards for your own.
No miners have had the balls to do this so far, but they will soon, or Bitcoin dies from the halvings in a few more years, as fee volume won't sustain security. As big blockers said to Core, there is no room for arbitrary "conservatism" in the face of an oncoming train.
Finally, I leave you with a thought experiment. Imagine somehow the community of volunteer developers in Bitcoin was so incredibly generous that it offered all miners ASIC designs, mining pool software, and all manner of hashing optimizations, to the point that miners merely had to buy ASICs and plug them in with no need to understand anything at all, and no need to try innovating on their own with ASIC design since these incredibly skilled volunteers trumped everything they could possibly come up with. Now naturally this situation must eventually come to an end, as the real pros step in, like Samsung.
With security thereby left out of the purview of cutthroat market competition, thanks to overweening volunteerism that continued for too long (no problem with volunteers at the start, just a child isn't born into the world an adult and needs parenting at first), these miners would be wholly unvetted, unprepared, unable to scale up their hashing operations and be obliterated by Samsung or maybe a government 51% attack to kill Bitcoin.
The point here is there is a formative period, and then there is adulthood. Growing up is a process of relying less and less on handouts, being exposed more and more to the cutthroat realities of the world. When is Bitcoin going to grow up? The halvings place a time limit on Bitcoin's security, and overprotective parents (those who don't want to remove the cap) -- in an ostensible effort to be conservative -- may end up keeping Honeybadger holed up in his figurative mom's basement too long for him to accomplish his mission.
*and if your response is, "This doesn't exist yet in any clients," I think you have missed the point of this post: again, that's a good thing. Let miners who are too incompetent to figure out something that simple get sloughed away. Do we really want such sluggards? If so and you're a dev, volunteer some code to them. If not, try to get hired by them instead. I think the pay will be much better.
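For concreteness, here is a rough sketch of what such a local policy could look like: a miner refusing to build on blocks stuffed with zero-fee, low-coin-age transactions. The thresholds, field names and structure are hypothetical; as noted above, nothing like this ships in any client today.

    from dataclasses import dataclass

    @dataclass
    class Tx:
        fee_sats: int
        coin_age_days: float  # age of the coins being spent

    @dataclass
    class Block:
        txs: list

    def acceptable(block,
                   max_zero_fee_ratio=0.10,   # hypothetical tolerance
                   min_coin_age_days=30.0):   # hypothetical coin-age floor
        # Count transactions that look like self-stuffing: zero fee and young coins.
        suspicious = [t for t in block.txs
                      if t.fee_sats == 0 and t.coin_age_days < min_coin_age_days]
        return len(suspicious) <= max_zero_fee_ratio * len(block.txs)

    stuffed = Block(txs=[Tx(0, 0.1)] * 900 + [Tx(500, 90.0)] * 100)
    normal = Block(txs=[Tx(250, 45.0)] * 1000)
    print(acceptable(stuffed), acceptable(normal))  # False True

Each miner could tune (or keep private) its own thresholds, which is exactly the Keynesian-beauty-contest dynamic described below.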
And if your response is, "But that means some miners might get orphaned unexpectedly and cry foul," then once again I say, that's a good thing. Block creation is fundamentally a speculative process. In other words, it's a gamble, by design. It's a Keynesian beauty contest wherein each miner tries to mine the greediest block they can get away with while not upping their orphan risk appreciably. Messing around with low-coin-age 0-fee tx stuffing might get you orphaned, boo-hoo. Miners are under no obligation to tell other miners their standards for block beauty in advance, even though they typically have done so thus far. Miners are ALWAYS free to orphan a block for ANY reason. That they generally keep to consistent, well-broadcast rules is a courtesy, not a necessity. Preventing general assholery isn't necessarily best effected by being up-front about what you will punish, but even if it is, miners can do that, too (let them figure it out, as they do for hashpower -- unless you have a good argument for why there is no possible solution or the solution is necessary too hard for a professional organization to figure out in reasonable time; that's the bar for objection, not "well the volunteer dev code doesn't do this yet").
And if your response is, "That will increase the orphan rate," yes and orphans already happen routinely so it is certainly not any catastrophe. See it as a detox process. It might put some small strain on the network as the slowpokes and dickheads are smacked, but again miners still choose this level of orphaning as well by the same Keynesian-beauty-contest dynamic. Orphans are a key part of why Bitcoin works and why it can scale, but if the orphan rate would interfere with service too much (unlikely if you believe 0-conf works), that also gets taken into account in the beauty contest and gets balanced with the benefits of punishing bad behavior and the costs of stomaching the poison block. The offending miner can also be un-whitelisted, returned to rando-node status, but again why are we trying to coddle miners by coming up with their strategies for being better professionals for them? Hopefully it is clear by now that all such arguments are central planning, which is bad at least after an early parental phase which I think has long since passed its natural life.
submitted by ratifythis to btc [link] [comments]

Emergent Coding FAQ

Background reading
  1. https://youtu.be/-MMQUspVduo ELI5 with pictures.
  2. https://youtu.be/ZSkZxOJ5HPA Hello World using Emergent Coding
  3. https://codevalley.com/whitepaper.pdf This document treats Emergent coding from a philosophical perspective. It has a good introduction, description of the tech and is followed by two sections on justifications from the perspective of Fred Brooks No Silver Bullet criteria and an industrialization criteria.
  4. Mark Fabbro's presentation from the Bitcoin Cash City Conference which outlines the motivation, basic mechanics, and usage of Bitcoin Cash in reproducing the industrial revolution in the software industry.
  5. Building the Bitcoin Cash City presentation highlighting how the emergent coding group of companies fit into the adoption roadmap of North Queensland.
  6. Forging Chain Metal by Paul Chandler CEO of Aptissio, one of startups in the emergent coding space and which secured a million in seed funding last year.
  7. Bitcoin Cash App Exploration A series of Apps that are some of the first to be built by emergent coding and presented, and in the case of Cashbar, demonstrated at the conference.
  8. A casual Bitcoin Cash interview that touches on emergent coding, tech park, merchant adoption and much more.
How does Emergent Coding prevent developer capture?
A developer's Agent does not know what project they are contributing to and is thus paid for the specific contribution. The developer is controlling the terms of the payment rather than the alternative, an employer with an employment agreement.
Why does Emergent Coding use Bitcoin BCH?
  1. Both emergent coding and Bitcoin BCH are decentralized: As emergent coding is a decentralized development environment consisting of Agents providing their respective design services, each contract received by an Agent requires a BCH payment. As Agents are hosted by their developer owners, who may reside in any of 150 countries, Bitcoin Cash - a peer-to-peer electronic cash system - is ideal for including a developer regardless of geographic location.
  2. Emergent coding will increase the value of the Bitcoin BCH blockchain: With EC, there are typically many contracts to build an application (Cashbar was designed with 10000 contracts or so). EC adoption will increase the value of the Bitcoin BCH blockchain in line with this influx of quality economic activity.
  3. Emergent coding is being applied to BCH software first: One of the first market verticals being addressed with emergent coding is Bitcoin Cash infrastructure. We are already seeing quality applications created using emergent coding (such as the HULA, Cashbar, PH2, vending, ATMs etc). More apps and tools supporting Bitcoin cash will attract more merchants and business to BCH.
  4. Emergent coding increases productivity: Emergent coding increases developer productivity and reduces duplication compared to other software development methods. Emergent coding can provide BCH devs with an advantage over other coins. A BCH dev productivity advantage will accelerate Bitcoin BCH becoming the first global currency.
  5. Emergent coding produces higher quality binaries: Higher quality software leads to a more reliable network.

1. Who/what is Code Valley? Aptissio? BCH Tech Park? Mining and Server Complex?
Code Valley Corp Pty Ltd is the company founded to commercialize emergent coding technology. Code Valley is incorporated in North Queensland, Australia. See https://codevalley.com
Aptissio Australia Pty Ltd is a company founded in North Queensland and an early adopter of emergent coding. Aptissio is applying EC to Bitcoin BCH software. See https://www.aptissio.com
Townsville Technology Precincts Pty Ltd (TTP) was founded to bring together partners to answer the tender for the Historic North Rail Yard Redevelopment in Townsville, North Queensland. The partners consist of P+I, Conrad Gargett, HF Consulting, and a self-managed superannuation fund (SMSF), with Code Valley Corp Pty Ltd expected to be signed as an anchor tenant. TTP answered a Townsville City Council (TCC) tender with a proposal for an AUD$53m project (stage 1) to turn the yards into a technology park and subsequently won the tender. The plan calls for the bulk of the money to be raised in the Australian equity markets, with the city contributing 28% for remediation of the site and just under 10% coming from the SMSF. Construction is scheduled to begin in mid 2020 and be completed two years later.
Townsville Mining Pty Ltd was set up to develop a Server Complex in the Kennedy Energy Park in North Queensland. The site has undergone several studies as part of a due diligence process with encouraging results for its competitiveness in terms of real estate, power, cooling and data.
  1. TM are presently in negotiations with the owners of the site and is presently operating under an NDA.
  2. The business model calls for leasing "sectors" to mining companies that wish to mine allowing companies to control their own direction.
  3. Since Emergent Coding uses the BCH rail, TM is seeking to contribute to BCH security with an element of domestic mining.
  4. TM are working with American partners to lease one of the sectors to meet that domestic objective.
  5. The site will also host Emergent Coding Agents and Code Valley and its development partners are expected to lease several of these sectors.
  6. TM hopes to have the site operational within 2 years.
2. What programming language are the "software agents" written in?
Agents are "built" using emergent coding. You select the features you want your Agent to have and send out the contracts. In a few minutes you are in possession of a binary ELF. You run up your ELF on your own machine and it will peer with the emergent coding and Bitcoin Cash networks. Congratulations, your Agent is now ready to accept its first contract.
3. Who controls these "agents" in a software project
You control your own Agents. It is a decentralized development system.
4. What is the software license of these agents? Full EULA here, now.
A license gives you the right to create your own Agents and participate in the decentralized development system. We will publish the EULA when we release the product.
5. What kind of software architecture do these agents have? Daemons responding to API calls? Background daemons that make remote connections to listening applications?
Your Agent is a server that requires you to open a couple of ports so as to peer with both EC and BCH networks. If you run a BCH full node you will be familiar with this process. Your Agent will create a "job" for each contract it receives and is designed to operate thousands of jobs simultaneously in various stages of completion. It is your responsibility to manage your Agent and keep it open for business or risk losing market share to another developer capable of designing the same feature in a more reliable manner (or at better cost, less resource usage, faster design time etc.). For example, there is competition at every classification which is one reason emergent coding is on a fast path for improvement.
It is worth reiterating here that Agents are only used in the software design process and do not perform any role in the returned project binary.
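For readers who want a mental model of the job handling described above, here is a hedged sketch in Python. The real Agent is a compiled ELF speaking a proprietary protocol; the queue, field names and design step below are invented purely to illustrate the "one job per contract, thousands in flight" idea, not the actual implementation.

    import queue
    import threading
    import uuid

    class Agent:
        def __init__(self, classification):
            self.classification = classification
            self.jobs = queue.Queue()

        def receive_contract(self, contract):
            # Each incoming contract becomes an independent job; payment would
            # be settled in BCH out of band.
            job_id = str(uuid.uuid4())
            self.jobs.put((job_id, contract))
            return job_id

        def worker(self):
            while True:
                job_id, contract = self.jobs.get()
                # Jobs are independent, so thousands can be in flight at once.
                design = self.design_feature(contract)
                contract["reply_to"](job_id, design)
                self.jobs.task_done()

        def design_feature(self, contract):
            # Placeholder: a real Agent would let sub-contracts and join
            # collaborations here rather than return a canned answer.
            return {"feature": self.classification,
                    "requires": contract["requirements"]}

    agent = Agent("addition")
    threading.Thread(target=agent.worker, daemon=True).start()
    agent.receive_contract({"requirements": ["input_a", "input_b", "result"],
                            "reply_to": lambda jid, d: print(jid, d)})
    agent.jobs.join()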
6. What is the communication protocol these agents use?
The protocol is proprietary and is part of your license.
7. Are the agents patented? Who can use these agents?
It is up to you if you want to patent your Agent. The underlying innovation behind emergent coding is _feasible_ developer specialization. Emergent coding gives you the ability to contribute to a project without revealing your intellectual property, thus creating prospects for repeat business; it renders software patents moot.
Who uses your Agents? Your Agents earn you BCH with each design contribution made. It would be wise to have your Agent open for business at all times and encourage everyone to use your design service.
8. Do I need to cooperate with Code Valley company all of the time in order to deploy Emergent Coding on my software projects, or can I do it myself, using documentation?
It is a decentralized system. There is no single point of failure. Code Valley intends to defend the emergent coding ecosystem from abuse and bad actors but that role is not on your critical path.
9. Let's say Electron Cash is an Emergent Coding project. I have found a critical bug in the binary. How do I report this bug, and what does Jonald Fyookball need to do, assuming the buggy component is a "shared component" pulled from EC "repositories"?
If you built Electron Cash with emergent coding it will have been created by combining several high level wallet features designed into your project by their respective Agents. Obviously behind the scenes there are many more contracts that these Agents will let and so on. For example the Cashbar combines just 16 high level Point-of-Sale features but ultimately results in more than 10,000 contracts in toto. Should one of these 10,000 make a design error, Jonald only sees the high level Agents he contracted. He can easily pinpoint which of these contractors are in breach. Similarly this contractor can easily pinpoint which of its sub-contractors is in breach and so on. The offender that breached their contract wherever in the project they made their contribution, is easily identified. For example, when my truck has a warranty problem, I do not contact the supplier of the faulty big-end bearing, I simply take it back to Mazda who in turn will locate the fault.
Finally "...assuming the buggy component is a 'shared component' puled from EC 'repositories'?" - There are no repositories or "shared component" in emergent coding.
10. What is your licensing/pricing model? Per project? Per developer? Per machine?
Your Agent charges for each design contribution it makes (ie per contract). The exact fee is up to you. The resulting software produced by EC is unencumbered. Code Valley's pricing model consists of a seat license but while we are still determining the exact policy, we feel the "Valley" (where Agents advertise their wares) should charge a small fee to help prevent gaming the catalogue and a transaction fee to provide an income in proportion to operations.
11. What is the basic set of applications I need in order to deploy full Emergent Coding in my software project? What is the function of each application? Daemons, clients, APIs, Frontends, GUIs, Operating systems, Databases, NoSQLs? A lot of details, please.
There's just one. You buy a license and are issued with our product called Pilot. You run Pilot (node) up on your machine and it will peer with the EC and BCH networks. You connect your browser to Pilot typically via localhost and you're in business. You can build software (including special kinds of software like Agents) by simply combining available features. Pilot allows you to specify the desired features and will manage the contracts and decentralized build process. It also gives you access to the "Valley" which is a decentralized advertising site that contains all the "business cards" of each Agent in the community, classified into categories for easy search.
If we are to make a step change in software design, inventing yet another HLL will not cut it. As Fred Brooks puts it, an essential change is needed.
12. How can I trust a binary when I can not see the source?
The Emergent Coding development model is very different from what you are used to. There are ways of arriving at a binary without source code.
The Agents in emergent coding design their feature into your project without writing code. We can see the features we select but cannot show the source, as the design process doesn't use an HLL.
The trust model is also different. The bulk of the testing happens _before_ the project is designed, not _after_. Emergent Coding produces a binary with very high integrity, and arguably far more testing is done in emergent coding than in the incumbent methods you are used to.
In emergent coding, your reputation is built upon the performance of your Agent.
If your Agent produces substandard features, you are simply creating an opportunity for a competitor to increase their market share at your expense.
Here are some points worth noting regarding bad actor Agents:
  1. An Agent is a specialist and in emergent coding is unaware of the project they are contributing to. If you are a bad actor, do you compromise every contract you receive? Some? None?
  2. Your client is relying on the quality of your contribution to maintain their own reputation. Long before any client will trust your contributions, they will have tested you to ensure the quality is at their required level. You have to be at the top of your game in your classification to even win business. This isn't some shmuck pulling your routine from a library.
  3. Each contract to your agent is provisioned. Ie you advertise in advance what collaborations you require to complete your design. There is no opportunity for a "sign a Bitcoin transaction" Agent to be requesting "send an HTTP request" collaborations.
  4. Your Agent never gets to modify code, it makes a design contribution rather than a code contribution. There is no opportunity to inject anything as the mechanism that causes the code to emerge is a higher order complexity of all Agent involvement.
  5. There is near perfect accountability in emergent coding. You are being contracted and paid to do the design. Every project you compromise has an arrow pointed straight at you should it be detected even years later.
Security is a whole other ball game in emergent coding and current rules do not necessarily apply.
13. Every time someone rebuilds their application, do they have to pay over again for all "design contributions"? (Or is the ability to license components at fixed single price for at least a limited period or even perpetually, supported by the construction (agent) process?)
You are paying for the design. Every time you build (or rebuild) an application, you pay the developers involved. They do not know they are "rebuilding". This sounds dire, but it costs far less than you think and there are many advantages. Automation is very high with emergent coding, so software design is completed for a fraction of the cost of incumbent design methods. You could perhaps rebuild many times before matching the cost of incumbent methods. Adding features is hard with incumbent methods: "..very few late-stage additions are required before the code base transforms from the familiar to a veritable monster of missed schedules, blown budgets and flawed products" (Brooks Jr 1987), whereas with emergent coding adding a late-stage feature requires a rebuild and hence integrates seamlessly. With Emergent Coding, you can add an unlimited number of features without risking the codebase, as there isn't one.
The second part of your question incorrectly assumes software is created from licensed components rather than created by paying Agents to design features into your project without any licenses involved.
14. In this construction process, is the vendor of a particular "design contribution" able to charge differential rates per their own choosing? e.g. if I wanted to charge a super-low rate to someone from a 3rd world country versus charging slightly more when someone a global multinational corporation wants to license my feature?
Yes. Developers set the price and policy of their Agent's service. The Valley (where your Agent is advertised) presently only supports a simple price policy. The second part of your question incorrectly assumes features are encumbered with licenses. A developer can provide their feature without revealing their intellectual property. A client has the right to reuse a developer's feature in another project but will find it uneconomical to do so.
15. Is "entirely free" a supported option during the contract negotiation for a feature?
Yes. You set the price of your Agent.
16. "There is no single point of failure." Right now, it seems one needs to register, license the construction tech etc. Is that going to change to a model where your company is not necessarily in that loop? If not, don't you think that's a single point of failure?
It is a decentralized development system. Once you have registered you become part of a peer-to-peer system. Code Valley has thought long and hard about its role and has chosen the reddit model. It will set some rules for your participation and will detect or remove bad actors. If, in your view, Code Valley becomes a bad actor, you have control over your Agent, private keys and IP, you can leave the system at any time.
17. What if I can't obtain a license because of some or other jurisdictional problem? Are you allowed to license the technology to anywhere in the world or just where your government allows it?
We are planning to operate in all 150 countries. As EC is peer-to-peer, Code Valley does not need to register as a digital currency exchange or the like. Only those countries banning BCH will miss out (until such time as BCH becomes the first global electronic cash system).
18.
For example the Cashbar combines just 16 high level Point-of-Sale features but ultimately results in more than 10,000 contracts in toto.
It seems already a reasonably complex application, so well done in having that as a demo.
Thank you.
19. I asked someone else a question about how it would be possible to verify whether an application (let's say one received as a binary executable) has been built with your system of emergent coding. Is this possible?
Yes of course. If you used ec to build an application, you can sign it and claim anything you like. Your client knows it came from you because of your signature. The design contributions making up the application are not signed but surprisingly there is still perfect accountability (see below).
20. I know it is possible to identify for example all source files and other metadata (like build environment) that went into constructing a binary, by storing this data inside an executable.
All emergent coding metadata is now stored offline. When your Agent completes a job, you have a log of the design agreements you made with your peers, etc. If you are challenged at a later date for breaching a design contract, you can pull your logs to see what decisions you made, what sub-contracts were let, and so on. As every Agent has their own logs, the community as a whole has a completely trustless log of each project undertaken.
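As a purely illustrative sketch (the real log format is not public, so every field here is hypothetical), an offline job log entry of the kind described might look something like this:

    import json, time

    log_entry = {
        "job_id": "a1b2c3",
        "contract": {"classification": "integer/add", "fee_sats": 250},
        "collaborations": [
            {"name": "input_a", "agreement": {"type": "int64", "policy": "register", "reg": "rbx"}},
            {"name": "input_b", "agreement": {"type": "int64", "policy": "catch-all", "addr": "0x4fff2"}},
            {"name": "result", "agreement": {"type": "int64", "policy": "catch-all"}},
        ],
        "subcontracts": [],          # none for a byte-layer agent
        "completed_at": int(time.time()),
    }

    # Kept locally by the Agent's owner; if a design contract is ever disputed,
    # each party can produce its own log and the chain of contracts is auditable.
    print(json.dumps(log_entry, indent=2))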
21. Is this being done with EC build products and would it allow the recipient to validate that what they've been provided has been built only using "design contributions" cryptographically signed by their providers and nothing else (i.e. no code that somehow crept in that isn't covered by the contracting process)?
The emergent coding trust model is very effective and has been proven in other industries. Remember, your Agent creates a feature in my project by actually combining smaller features contracted from other Agents, thus your reputation is linked to that of your suppliers. If Bosch makes a faulty relay in my Ford, I blame Ford for a faulty car not Bosch when my headlights don't work. Similarly, you must choose and vet your sub-contractors to the level of quality that you yourself want to project. Once these relationships are set up, it becomes virtually impossible for a bad actor to participate in the system for long or even from the get go.
22. A look at the generated code, and a surprising answer to why every intermediate variable is spilled
Thanks to u/R_Sholes, here is a snippet of the actual code generated for: number = number * 10 + digit, as a part of: sub read/integer boolean($, 0, 100) -> guess
    ; copy global to local temp variable
    0x004032f2  movabs r15, global.current_digit
    0x004032fc  mov r15, qword [r15]
    0x004032ff  mov rax, qword [r15]
    0x00403302  movabs rdi, local.digit
    0x0040330c  mov qword [rdi], rax
    ; copy global to local temp variable
    0x0040330f  movabs r15, global.guess
    0x00403319  mov r15, qword [r15]
    0x0040331c  mov rax, qword [r15]
    0x0040331f  movabs rdi, local.num
    0x00403329  mov qword [rdi], rax
    ; multiply local variable by constant, uses new temp variable for output
    0x0040332c  movabs r15, local.num
    0x00403336  mov rax, qword [r15]
    0x00403339  movabs rbx, 10
    0x00403343  mul rbx
    0x00403346  movabs rdi, local.num_times_10
    0x00403350  mov qword [rdi], rax
    ; add local variables, uses yet another new temp variable for output
    0x00403353  movabs r15, local.num_times_10
    0x0040335d  mov rax, qword [r15]
    0x00403360  movabs r15, local.digit
    0x0040336a  mov rbx, qword [r15]
    0x0040336d  add rax, rbx
    0x00403370  movabs rdi, local.num_times_10_plus_digit
    0x0040337a  mov qword [rdi], rax
    ; copy local temp variable back to global
    0x0040337d  movabs r15, local.num_times_10_plus_digit
    0x00403387  mov rax, qword [r15]
    0x0040338a  movabs r15, global.guess
    0x00403394  mov rdi, qword [r15]
    0x00403397  mov qword [rdi], rax

For comparison, an equivalent snippet in C compiled by clang without optimizations gives this output:

    imul rax, qword ptr [guess], 10
    add rax, qword ptr [digit]
    mov qword ptr [guess], rax
Collaborations at the byte layer of Agents result in designs that spill every intermediate variable.
Firstly, why is this so?
Agents in this early version only support one catch-all variable design policy when collaborating. It is similar to a compiler when all registers contain variables: the compiler must spill a register temporarily to main memory. The compiler would still work if it spilled every variable to main memory, but it would produce code that is, as above, hopelessly inefficient.
However, by only supporting the catch-all portion of the protocol, the Code Valley designers were able to design, build and deploy these agents faster, because an Agent needs fewer predicates in order to participate in these simpler collaborations.
The protocol involved, however, can have many "policies" besides the catch-all default policy (Agents can collaborate over variables designed to be on the stack, or, as is common for intermediate variables, designed to use a CPU register, and so forth).
This example highlights one of the very exciting aspects of emergent coding. If we now add a handful of additional predicates to a handful of these byte layer agents, henceforth ALL project binaries will be 10x smaller and 10x faster.
Finally, there can be many Agents competing for market share at each classification. If these "gumby" agents do not improve, you can create a "smarter" competitor (i.e. with more predicates) and win business away from them. Candy from a baby. Competition means the smartest agents bubble to the top of every classification and puts the entire emergent coding platform on a fast path for improvement. Contrast this with incumbent libraries, which do not have a financial incentive to improve. Just wait until you get to see our production system.
23. How hard can an ADD Agent be?
Typically an Agent's feature is created by combining smaller features from other Agents. The smallest features are so devoid of context and complexity they can be rendered by designing a handful of bytes in the project binary. Below is a description of one of these "byte" layer Agents to give you an idea how they work.
An "Addition" Agent creates the feature of "adding two numbers" in your project (This is an actual Agent). That is, it contributes to the project design a feature such that when the project binary is delivered, there will be an addition instruction somewhere in it that was designed by the contract that was let to this Agent.
If you were this Agent, for each contract you received, you would need to collaborate with peers in the project to resolve vital requirements before you can proceed to design your binary "instruction".
For each paid contract your Agent receives, it will need to participate in at least 4 collaborations within the design project. These are:
  1. Input A collaboration
  2. Input B collaboration
  3. Result collaboration
  4. Construction site collaboration
You can see from the collaborations involved how your Agent can determine the precise details needed to design its instruction. As part of the contract, the Addition Agent will be provisioned with contact details so it can join these collaborations. Your Agent must collaborate with the other stakeholders in each collaboration to resolve that requirement - in this case, how a variable will be treated. The stakeholders use a protocol to arrive at an agreement and share the terms of that agreement. For example, the stakeholders of collaboration “Input A” may agree to treat the variable as a signed 64-bit integer, resolve to locate it at location 0x4fff2, or alternatively agree that the RBX register should be used, or agree to use one of the many other ways a variable can be represented. Once each collaboration has reached an agreement and the terms of that agreement are distributed, your Agent can begin to design the binary instruction. The construction site collaboration is where you will place your binary bytes exactly.
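A hedged sketch of how such an Agent might map its collaboration agreements onto a concrete design follows. The agreement fields and the instruction choices are invented for illustration and are not the real protocol; note how a literal operand of 1 can lead to an increment rather than an add, as noted later in this FAQ.

    def design_addition(input_a, input_b, result):
        # If one operand was agreed to be the literal 1, an increment suffices.
        if input_b.get("literal") == 1:
            return "inc %s" % input_a["reg"]
        if input_a.get("literal") == 1:
            return "inc %s" % input_b["reg"]
        # Both operands agreed to live in registers: a plain add is designed.
        if input_a.get("policy") == "register" and input_b.get("policy") == "register":
            return "add %s, %s" % (input_a["reg"], input_b["reg"])
        # Catch-all policy: operands live in memory, so load/add/store is designed.
        return ("mov rax, [%s] ; add rax, [%s] ; mov [%s], rax"
                % (input_a["addr"], input_b["addr"], result["addr"]))

    print(design_addition({"policy": "register", "reg": "rbx"},
                          {"literal": 1},
                          {"policy": "register", "reg": "rbx"}))
    print(design_addition({"policy": "catch-all", "addr": "0x4fff2"},
                          {"policy": "catch-all", "addr": "0x4fffa"},
                          {"policy": "catch-all", "addr": "0x50002"}))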
The construction site protocol is detailed in the whitepaper and is some of the magic that allows the decentralized development system to deliver the project binary. The protocol consists of 3 steps,
  1. You request space in the project binary be reserved.
  2. You are notified of the physical address of your requested space.
  3. You deliver the binary bytes you designed to fill the reserved space.
Once the bytes are returned your Agent can remove the job from its work schedule. Job done, payment received, another happy customer with a shiny ADD instruction designed into their project binary.
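Here is a minimal sketch of that three-step exchange. The message names and the construction-site object are invented to show the flow only; the actual protocol is part of the licensed product.

    class ConstructionSite:
        def __init__(self):
            self.reservations = {}   # agent -> (offset, size)
            self.image = bytearray()

        def reserve(self, agent, size):                 # step 1: request space
            self.reservations[agent] = (len(self.image), size)
            self.image.extend(b"\x00" * size)

        def address_of(self, agent, base=0x400000):     # step 2: learn the address
            offset, _ = self.reservations[agent]
            return base + offset

        def deliver(self, agent, code):                 # step 3: deliver the bytes
            offset, size = self.reservations[agent]
            assert len(code) == size, "delivered bytes must fill the reserved space"
            self.image[offset:offset + size] = code

    site = ConstructionSite()
    site.reserve("addition-agent", size=3)
    addr = site.address_of("addition-agent")          # where the bytes will live
    site.deliver("addition-agent", b"\x48\x01\xd8")   # x86-64 'add rax, rbx'
    print(hex(addr), site.image.hex())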
Note:
  1. Observe how it is impossible for this ADD Agent to install a backdoor undetected by the client.
  2. Observe how the Agent isn’t linking a module, or using a HLL to express the binary instruction.
  3. Observe how with just a handful of predicates you have a working "Addition" Agent capable of designing the Addition Feature into a project with a wide range of collaboration agreements.
  4. Observe how this Agent could conceivably not even design-in an ADD instruction if one of the design time collaboration agreements was a literal "1" (It would design in an increment instruction). There is even a case where this Agent may not deliver any binary to build its feature into your project!
24. How does EC arrive at a project binary without writing source code?
Devs using EC combine features to create solutions. They don't write code. EC devs contract Agents which design the desired features into their project for a fee. Emergent coding uses a domain-specific contracting language (called pilot) to describe the necessary contracts. Pilot is not a general purpose language. As agents create their features by similarly combining smaller features contracted from peers, your desired features may ultimately result in thousands of contracts. As it is agents all the way down, there is no source code behind the project binary.
Traditional: Software requirements -> write code -> compile -> project binary (ELF).
Emergent coding: Select desired features -> contract agents -> project binary (ELF).
Agents themselves are created the same way - specify the features you want your agent to have, contract the necessary agents for those features and voilà - an agent project binary (ELF).
25. How is the actual binary code that agents deliver to each other written?
An agent never touches code. With emergent coding, agents contribute features to a project, and leave the project binary to emerge as the higher-order complexity of their collective effort. Typically, agents “contribute” their feature by causing smaller features to be contributed by peers, who in turn, do likewise. By mapping features to smaller features delivered by these peers, agents ensure their feature is delivered to the project without themselves making a direct code contribution.
Peer connections established by these mappings serve to both incrementally extend a temporary project “scaffold” and defer the need to render a feature as a code contribution. At the periphery of the scaffold, features are so simple they can be rendered as a binary fragment with these binary fragments using the information embodied by the scaffold to guide the concatenation back along the scaffold to emerge as the project binary - hence the term Emergent Coding.
Note the scaffold forms a temporary tree-like structure which allows virtually all the project design contracts to be completed in parallel. The scaffold also automatically limits an agent's scope to precisely the resources and site for their feature. It is why it is virtually impossible for an agent to install a "back door" or other malicious code into the project binary.
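To give a rough intuition for the scaffold, here is a toy model in Python in which leaf features render binary fragments and the project binary emerges by concatenating them back along the tree. It is entirely illustrative; the real scaffold carries far more design information than a simple tree, and the feature names and bytes below are made up.

    from dataclasses import dataclass, field

    @dataclass
    class Feature:
        name: str
        fragment: bytes = b""                 # only leaf features render bytes
        parts: list = field(default_factory=list)

        def emerge(self):
            if not self.parts:
                return self.fragment
            # An internal feature contributes no bytes of its own; its feature
            # is delivered by the collective output of the features it contracted.
            return b"".join(p.emerge() for p in self.parts)

    project = Feature("point-of-sale", parts=[
        Feature("read-amount", parts=[
            Feature("read-digit", fragment=b"\x0f\x05"),     # placeholder bytes
            Feature("accumulate", fragment=b"\x48\x01\xd8"),
        ]),
        Feature("display-total", fragment=b"\xc3"),
    ])
    print(project.emerge().hex())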
submitted by nlovisa to EmergentCoding [link] [comments]

Decred Journal — June 2018

Note: You can read this on GitHub, Medium or old Reddit to see the 207 links.

Development

The biggest announcement of the month was the new kind of decentralized exchange proposed by @jy-p of Company 0. The Community Discussions section considers the stakeholders' response.
dcrd: Peer management and connectivity improvements. Some work for improved sighash algo. A new optimization that gives 3-4x faster serving of headers, which is great for SPV. This was another step towards multipeer parallel downloads – check this issue for a clear overview of progress and planned work for next months (and some engineering delight). As usual, codebase cleanup, improvements to error handling, test infrastructure and test coverage.
Decrediton: work towards watching only wallets, lots of bugfixes and visual design improvements. Preliminary work to integrate SPV has begun.
Politeia is live on testnet! Useful links: announcement, introduction, command line voting example, example proposal with some votes, mini-guide how to compose a proposal.
Trezor: Decred appeared in the firmware update and on Trezor website, currently for testnet only. Next steps are mainnet support and integration in wallets. For the progress of Decrediton support you can track this meta issue.
dcrdata: Continued work on Insight API support, see this meta issue for progress overview. It is important for integrations due to its popularity. Ongoing work to add charts. A big database change to improve sorting on the Address page was merged and bumped version to 3.0. Work to visualize agenda voting continues.
Ticket splitting: 11-way ticket split from last month has voted (transaction).
Ethereum support in atomicswap is progressing and welcomes more eyeballs.
decred.org: revamped Press page with dozens of added articles, and a shiny new Roadmap page.
decredinfo.com: a new Decred dashboard by lte13. Reddit announcement here.
Dev activity stats for June: 245 active PRs, 184 master commits, 25,973 added and 13,575 deleted lines spread across 8 repositories. Contributions came from 2 to 10 developers per repository. (chart)

Network

Hashrate: growth continues, the month started at 15 and ended at 44 PH/s with some wild 30% swings on the way. The peak was 53.9 PH/s.
F2Pool was the leader, varying between 36% and 59% of the hashrate, followed by coinmine.pl holding between 18% and 29%. In response to concerns about its hashrate share, F2Pool made a statement that they will consider measures like raising the fees to prevent growing to 51%.
Staking: 30-day average ticket price is 94.7 DCR (+3.4). The price was steadily rising from 90.7 to 95.8 peaking at 98.1. Locked DCR grew from 3.68 to 3.81 million DCR, the highest value was 3.83 million corresponding to 47.87% of supply (+0.7% from previous peak).
Nodes: there are 240 public listening and 115 normal nodes per dcred.eu. Version distribution: 57% on v1.2.0 (+12%), 25% on v1.1.2 (-13%), 14% on v1.1.0 (-1%). Note: the reported count of non-listening nodes has dropped significantly due to data reset at decred.eu. It will take some time before the crawler collects more data. On top of that, there is no way to exactly count non-listening nodes. To illustrate, an alternative data source, charts.dcr.farm showed 690 reachable nodes on Jul 1.
Extraordinary event: blocks 247361 and 247362 were two nearly full blocks. Normally blocks are 10-20 KiB, but these blocks were 374 KiB (the max is 384 KiB).

ASICs

Update from Obelisk: shipping is expected in the first half of July and there is a non-zero chance of meeting the hashrate target.
Another Chinese ASIC spotted on the web: Flying Fish D18 with 340 GH/s at 180 W, costing 2,200 CNY (~340 USD). (asicok.com, translated; also on asicminervalue)
dcrASIC team posted a farewell letter. Despite having an awesome 16 nm chip design, they decided to stop the project citing the saturated mining ecosystem and low profitability for their potential customers.

Integrations

bepool.org is a new mining pool spotted on dcred.eu.
Exchange integrations:
Two OTC trading desks are now shown on decred.org exchanges page.
BitPro payment gateway added Decred and posted on Reddit. Notably, it is fully functional without javascript or cookies and does not ask for name or email, among other features.
Guarda Wallet integrated Decred. Currently only in their web wallet, but more may come in future. Notable feature is "DCR purchase with a bank card". See more details in their post or ask their representative on Reddit. Important: do your best to understand the security model before using any wallet software.

Adoption

Merchants:
BlueYard Capital announced investment in Decred and the intent to be long term supporters and to actively participate in the network's governance. In an overview post they stressed core values of the project:
There are a few other remarkable characteristics that are a testament to the DNA of the team behind Decred: there was no sale of DCR to investors, no venture funding, and no payment to exchanges to be listed – underscoring that the Decred team and contributors are all about doing the right thing for long term (as manifested in their constitution for the project).
The most encouraging thing we can see is both the quality and quantity of high calibre developers flocking to the project, in addition to a vibrant community attaching their identity to the project.
The company will be hosting an event in Berlin, see Events below.
Arbitrade is now mining Decred.

Events

Attended:
Upcoming:

Media

stakey.club: a new website by @mm:
Hey guys! I'd like to share with you my latest adventure: Stakey Club, hosted at stakey.club, is a website dedicated to Decred. I posted a few articles in Brazilian Portuguese and in English. I also translated to Portuguese some posts from the Decred Blog. I hope you like it! (slack)
@morphymore translated Placeholder's Decred Investment Thesis and Richard Red's write-up on Politeia to Chinese, while @DZ translated Decred Roadmap 2018 to Italian and Russian, and A New Kind of DEX to Italian and Russian.
The second iteration of the Chinese ratings was released. Compared to the first issue, Decred dropped from 26th to 29th while Bitcoin fell from 13th to 17th. We (the authors) refrain from commenting on this one.
Videos:
Audio:
Featured articles:
Articles:

Community Discussions

Community stats: Twitter followers 40,209 (+1,091), Reddit subscribers 8,410 (+243), Slack users 5,830 (+172); the dcrd repository has 392 stars and 918 forks on GitHub. (A quick growth calculation follows this list.)
An update on our communication systems:
Jake Yocom-Piatt did an AMA on CryptoTechnology, a forum for serious crypto tech discussion. Some topics covered were Decred attack cost and resistance, voting policies, smart contracts, SPV security, DAO and DPoS.
A new kind of DEX was the subject of an extensive discussion in the #general, #random and #trading channels, as well as on Reddit. A new channel, #thedex, was created and attracted more than 100 people.
A frequent and fair question is how the DEX would benefit Decred. @lukebp has put it well:
Projects like these help Decred attract talent. Typically, the people that are the best at what they do aren’t driven solely by money. They want to work on interesting projects that they believe in with other talented individuals. Launching a DEX that has no trading fees, no requirement to buy a 3rd party token (including Decred), and that cuts out all middlemen is a clear demonstration of the ethos that Decred was founded on. It helps us get our name out there and attract the type of people that believe in the same mission that we do. (slack)
Another concern, that it would slow down other projects, was addressed by @davecgh:
The intent is for an external team to take up the mantle and build it, so it won't have any bearing on the current c0 roadmap. The important thing to keep in mind is that the goal of Decred is to have a bunch of independent teams working on different things. (slack)
A chat about Decred fork resistance started on Twitter and continued in #trading. Community members continue to discuss the finer points of Decred's hybrid system, bringing new users up to speed and answering their questions. The key takeaway is that the Decred chain cannot advance without votes, and to get around that a forker would need to change the protocol in a way that makes it clearly not Decred.
"Against community governance" article was discussed on Reddit and #governance.
"The Downside of Democracy (and What it Means for Blockchain Governance)" was another article arguing against on-chain governance, discussed here.
Reddit recap: mining rig shops discussion; how centralized is Politeia; controversial debate on photos of models that yielded useful discussion on our marketing approach; analysis of a drop in number of transactions; concerns regarding project bus factor, removing central authorities, advertising and full node count – received detailed responses; an argument by insette for maximizing aggregate tx fees; coordinating network upgrades; a new "Why Decred?" thread; a question about quantum resistance with a detailed answer and a recap of current status of quantum resistant algorithms.
Chats recap: Programmatic Proof-of-Work (ProgPoW) discussion; possible hashrate of Blake-256 miners is at least ~30% higher than SHA-256d; how Decred is not vulnerable to SPV leaf/node attack.
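As referenced next to the community stats above, a small Python sketch of the month-over-month growth implied by the quoted counts and deltas (previous-month figures are derived by subtracting the deltas, not taken from a separate source):

    # Month-over-month growth implied by the community stats quoted above.
    stats = {"Twitter": (40209, 1091), "Reddit": (8410, 243), "Slack": (5830, 172)}

    for name, (current, delta) in stats.items():
        previous = current - delta
        print(f"{name}: {delta / previous:+.1%} vs previous month")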

Markets

DCR opened the month at ~$93, reached a monthly high of $110, gradually dropped to a low of $58, and closed at $67. In BTC terms it went 0.0125 -> 0.0150 -> 0.0098 -> 0.0105. The downturn coincided with a decline across the whole crypto market.
In the middle of the month Decred was noticed to be #1 in the onchainfx "% down from ATH" chart and on this chart by @CoinzTrader. Towards the end of the month it dropped to #3.
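The monthly change and high-to-low drawdown implied by the prices above, in both USD and BTC terms (a quick illustrative calculation from the quoted figures, not additional market data):

    # Monthly change and drawdown implied by the DCR prices quoted above.
    usd = {"open": 93, "high": 110, "low": 58, "close": 67}
    btc = {"open": 0.0125, "high": 0.0150, "low": 0.0098, "close": 0.0105}

    for unit, p in (("USD", usd), ("BTC", btc)):
        change = p["close"] / p["open"] - 1
        drawdown = p["low"] / p["high"] - 1
        print(f"{unit}: month {change:+.1%}, high-to-low {drawdown:+.1%}")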

Relevant External

Obelisk announced Launchpad service. The idea is to work with coin developers to design a custom, ASIC-friendly PoW algorithm together with a first batch of ASICs and distribute them among the community.
Equihash-based ZenCash was hit by a double spend attack that led to a loss of $450,000 for the targeted exchange.
Almost one year after collecting funds, Tezos announced a surprise identification procedure to claim tokens (non-javascript version).
A hacker broke into Syscoin's GitHub account and implanted malware that steals passwords and private keys into the Windows binaries. This is a painful reminder for everybody to verify binaries after download (a minimal hash-check sketch follows this list).
Circle announced new asset listing framework for Poloniex. Relevant to recent discussions of exchange listing bribery:
Please note: we will not accept any kind of payment to list an asset.
Bithumb got hacked with a $30 million loss.
Zcash organized Zcon0, an event in Canada that focused on privacy tech and governance. An interesting insight from Keynote Panel on governance: "There is no such thing as on-chain governance".
Microsoft acquired GitHub. There was some debate about whether it is a reason to look into alternative solutions like GitLab right now. It is always a good idea to have a local copy of Decred source code, just in case.
Status update from @sumiflow on correcting DCR supply on various sites:
To begin with, none of the below sites were showing the correct supply or market cap for Decred, but we've made some progress.
coingecko.com, coinlib.io, cryptocompare.com, livecoinwatch.com, worldcoinindex.com - corrected!
cryptoindex.co, onchainfx.com - awaiting fix
coinmarketcap.com - refused to fix because devs have coins too? (slack)
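On the note above about verifying binaries after download: a minimal Python sketch of the hash-check step, assuming a hypothetical file name and a hash copied from a release manifest. Real releases also publish a PGP signature over the manifest, which should be verified (for example with gpg) before trusting the hashes.

    # Minimal sketch: compare a downloaded file's SHA-256 against a published manifest hash.
    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = "0123abcd..."                      # placeholder: copy from the signed manifest
    actual = sha256_of("project-vX.Y.Z.tar.gz")   # hypothetical file name
    print("OK" if actual == expected else "MISMATCH - do not run this binary")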

About This Issue

This is the third issue of Decred Journal after April and May.
Most information from third parties is relayed directly from source after a minimal sanity check. The authors of Decred Journal have no ability to verify all claims. Please beware of scams and do your own research.
The new public Matrix logs look promising and we hope to transition from Slack links to Matrix links. In the meantime, the way to read Slack links is explained in the previous issue.
As usual, any feedback is appreciated: please comment on Reddit, GitHub or #writers_room. Contributions are welcome too, anything from initial collection to final review to translations.
Credits (Slack names, alphabetical order): bee and Richard-Red. Special thanks to @Haon for bringing the May 2018 issue to Medium.
submitted by jet_user to decred [link] [comments]
