
ERC: Ethereum Name Service #137

Closed
Arachnid opened this issue Aug 5, 2016 · 45 comments

Comments

@Arachnid
Contributor

Arachnid commented Aug 5, 2016

Now at https://eips.ethereum.org/EIPS/eip-137

@axic
Member

axic commented Aug 5, 2016

function content(bytes32 node) constant returns (bytes32);

Is that intentionally limited to 256-bit hashes (Swarm)?

IPFS hashes can be >256 bits; even a regular SHA-256 hash is bigger because of the multihash wrapping. One could remove the Base-58 encoding, but the hash type shouldn't be removed.

Additionally, if only using a SHA256 of an IPFS resource, there's no way to distinguish it from a Swarm resource.
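One hedged way to address both concerns would be to make the record self-describing by returning a variable-length multihash rather than a bare bytes32; this is an illustrative signature only, not part of the proposal as written:

  // Illustrative only: the returned bytes encode the hash function and digest
  // length alongside the digest itself (multihash), so a consumer can tell
  // Swarm content apart from IPFS content.
  function content(bytes32 node) constant returns (bytes);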

@Arachnid
Contributor Author

Arachnid commented Aug 5, 2016

Good point. I was merely seeking to replicate the existing namereg's 'content()' method, but this could be a good opportunity to rethink it. Any suggestions?

@JustinDrake

Just throwing in a couple of ideas taken from BlockchainID to mitigate squatting, which mimic the behaviour of domain names:

  1. To register a name, you need to pay a fee (on top of the miner's fee)
  2. The shorter the name, the bigger the fee
  3. Every year, the name needs to be renewed by repaying the fee

@jph108

jph108 commented Aug 5, 2016

This is a nice idea. It might also be really convenient to have named Ethereum addresses. These names could be anything, but a really simple idea might be to favor an email address format. If I want to send ETH to '[email protected]', I'd much rather type their email address (which I have anyway) than a big Ethereum address. Of course, some verification protocol would be needed (confirm by sending an email to the actual address with a confirmation code and getting a response), otherwise people could register each other's email addresses. But with that in place, voila! PayPal-on-Ethereum. A side benefit: if the name doesn't resolve, the transfer doesn't go through, so you can't send to a bad address.

It might also be nice to add pathnames in a URL format, so:
'[email protected]/wallet1' and '[email protected]/wallet2'.

And why not add parameters? So:
[email protected]/wallet1/subwallet4?category=food

And while we're dreaming, an address book in Mist for all this. :-)

If the Ethereum Foundation ran this as a service, a named address could cost a little extra gas, which could go to development expenses.

@rabbit

rabbit commented Aug 5, 2016

@jph108 Email identifiers with path arguments conflict with HTTP Basic Auth syntax (i.e. https://[email protected]/example?q=1), so it's best to keep identifiers from being clever, especially since we'd fail to parse email identifiers as-is. However, you could achieve a similar effect by using Webfinger / LRDD style lookups to transform the email identifier into a URL identifier. Could also use that to directly link to the ENS node.

@akomba

akomba commented Aug 5, 2016

Great idea. One of the issues we need to cover is avoiding multiple, competing deployments, so we don't end up with situations where I own a domain name in 0xContractA and you own the same name in 0xContractB.

In other words, multiple, competing deployments could undermine the usefulness of the system.

Obviously, one of the big factors is price. So while I agree with @JustinDrake 's suggestion, the system should avoid requiring an extra fee, because that would ignite competition based on price. ("My Registrar Is Cheaper Than Yours!")

So the key is either universal or complementary adoption (e.g. multiple deployments would still work in the same "namespace"). That does indeed leave us with a potential squatting problem, though.

@jph108

jph108 commented Aug 5, 2016

@rabbit - Thanks, food for thought. You sent me to Wikipedia :-)

It might be worth thinking in terms of having two schemes and not mixing them. The current https scheme for URLs is:
https:[//[user:password@]host[:port]][/]path[?query][#fragment]

In Ethereum's context, does it make sense to support [user:password@]? Maybe [user@] only, or nothing. But anyway this is a URL scheme.

My thought is a different additional scheme to support account & contract addresses only. You wouldn't put this kind of address in a location field in a browser.

But if you could send ETH to ethaddr://name[/path][?query], that would be neat. name could be anything, but certain names in the format [email protected] could be validated through some service. Admittedly, that could get into requiring some kind of trusted system, but maybe there's a way around that. Sending to name/path could resolve to a different address than name alone.

I would love to be able to send ETH to [email protected]/poloniex, and have it show up in my Poloniex account, without having to copy and paste a big long address.

[EDIT] removed the brackets around name

@jpritikin

@JustinDrake I appreciate the need to mitigate squatting, but it would be nice if there were some namespace in ENS that did not require an annual fee. How about a registrar that requires a small deposit (say, 1 ETH or some stablecoin) and lets the owner hold that name indefinitely? When the name is relinquished, the deposit is returned. The top-level name for such a registrar could be ".develop" (something a bit less prestigious than ".com" or ".org") to preserve demand for higher-prestige names and mitigate squatting.

@JustinDrake

@jpritikin The annual fee also solves the problem of permanent loss of a name. If I register a name but lose the private key, then the name is gone.

@Deozaan

Deozaan commented Aug 6, 2016

@JustinDrake @jpritikin why not a combination of both ideas? The "fee" is actually a deposit which is automatically returned after a year, at which point another deposit needs to be made within a grace period or else the domain becomes available for anyone to claim.

@Arachnid
Contributor Author

Arachnid commented Aug 6, 2016

Just throwing in a couple of ideas taken from BlockchainID to mitigate squatting, which mimic the behaviour of domain names

I'd very much like to discuss governance, but I'm also trying to separate concerns, so I'd like to keep this EIP entirely about the technical implementation. As written, it's governance-independent, so we can solve the problems independently - and, in fact, we can have multiple TLDs or 2LDs that each have different governance rules. Please do join the name-registry channel on Gitter if you'd like to talk about governance. :)

Likewise, the URI format for wallet URIs etc. is out of scope except insofar as it impacts name resolution. Just like DNS, the 'name' part of the URI is handled by ENS and the rest is a layer on top.

@Arachnid
Contributor Author

Arachnid commented Aug 6, 2016

I've been thinking about the problem of how to handle resolvers that don't implement all resource types. Presently, Solidity doesn't guarantee the return value of a call to a contract that doesn't implement the function called, which is problematic. My suggestion is:

  • Require resolvers to have a fallback function that calls throw
  • Require resolvers to implement a function hasCapability(bytes32 cap) returns (bool); when called with a capability name, it returns true if the resolver implements that capability.

This allows the naive case - resolving a name and throwing if it doesn't exist - to proceed without any extra checks or overhead, while still permitting onchain resolvers to handle nonexistent names gracefully if they wish, at a small extra cost in gas. Capability names would be registered in this ERC; users could define 'x-' prefixed names for ad-hoc and pre-standardised extensions.
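For concreteness, here is a minimal sketch of a resolver following both rules, assuming a hypothetical "addr" capability name and 2016-era Solidity (constant, throw); it is not a reference implementation:

  contract AddrOnlyResolver {
      mapping (bytes32 => address) addrs;

      // Capability discovery: only the hypothetical "addr" capability is supported.
      function hasCapability(bytes32 cap) constant returns (bool) {
          return cap == "addr";
      }

      // Naive resolution: throw if the record doesn't exist.
      function addr(bytes32 node) constant returns (address) {
          if (addrs[node] == 0) throw;
          return addrs[node];
      }

      // Required fallback: calls to unimplemented record types throw.
      function() { throw; }
  }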

Thoughts?

@axic
Member

axic commented Aug 8, 2016

As we've discussed on Gitter, I'm trying to figure out a way to support both Swarm and IPFS locations with Mango and the very same issue applies to ENS with the Content hash record type.

Given that content-addressable networks can use different hashing algorithms (and in many cases the actual content is prefixed with a header, so the hash isn't solely over the submitted content), it seems reasonable to make the Content hash record type a bit more explicit.

One way is to have specific record types for each network, i.e. two record types: swarmContent, ipfsContent.

Alternatively, the content could be a URL, including bzz:// and IPFS's choice of prefix. See the lengthy discussion about the IPFS prefixes here (summary: they would pretty much prefer /ipfs/, but also seem to be OK with fs:/ipfs/).
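As a rough illustration of the first option (record-type names assumed here, not standardized anywhere):

  // Separate record types per network: a Swarm hash fits in bytes32, while an
  // IPFS multihash is variable length, so it would be returned as bytes.
  function swarmContent(bytes32 node) constant returns (bytes32);
  function ipfsContent(bytes32 node) constant returns (bytes);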

@Arachnid
Contributor Author

Arachnid commented Aug 8, 2016

RFC6920 might be worth a read, as it provides an RFC-standardized way to encode hash names.

@wanderer wanderer added the ERC label Aug 20, 2016
@Arachnid
Contributor Author

@axic Thinking about this further, I think the best approach is probably to have separate record types for Swarm, IPFS, and any other systems as necessary. With that in mind, it's probably sensible to rename content to something more specific - swarmHash perhaps?

@A2be

A2be commented Aug 26, 2016

I have no insight at present about the best way to implement an Ethereum Name Service (ENS), so I'm agnostic on the particulars.

I do think it is a very good idea, and having such an ENS in place will benefit the Ethereum ecosphere, provided it is available in a way that keeps the transaction cost of using names low for the broader community and reduces systemic inefficiencies.

In my view, long-term cybersquatting on unused names while adding no value is economically inefficient for the wider community, so I'm very pleased to see serious thought on how best to implement a decentralized ENS that deals with the worst aspects of such squatting. Please keep up the good work.

@Arachnid
Contributor Author

I've renamed setOwner's three-argument overload to setSubnodeOwner; while working on implementations it's become clear that support for overloaded functions in other environments is poor at best, and I'd rather reduce the friction.

@axic
Member

axic commented Sep 1, 2016

@Arachnid:

With that in mind, it's probably sensible to rename content to something more specific - swarmHash perhaps?

Will it resolve to a single Swarm hash or a hash to a manifest and a path?

@Arachnid
Contributor Author

Arachnid commented Sep 1, 2016

Will it resolve to a single Swarm hash or a hash to a manifest and a path?

Just to a single swarm manifest hash; much like DNS, I think links and redirects are out of scope for ENS.

@axic
Member

axic commented Sep 1, 2016

Can it be only a manifest hash, or could it also return a content hash, with the consumer of ENS left to deal with that?

@Arachnid
Contributor Author

Arachnid commented Sep 1, 2016

Yes, it could be either - in a URL, the protocol will tell the browser how to interpret the hash.

@axic
Member

axic commented Sep 1, 2016

Do you mean the ENS protocol or Swarm? If ENS, does that mean having two types (swarmContent and swarmManifest)?

@Arachnid
Contributor Author

Arachnid commented Sep 1, 2016

I mean the 'protocol' part of the URL - e.g. "http://", "bzz://", etc. - browsers and other tools use this to determine what to do with the name and path components of the URL. Swarm provides two protocol values, one for raw files and one for manifests.

@axic
Member

axic commented Sep 1, 2016

Oh, so your proposed swarmHash returns the bzz URL?

@Arachnid
Contributor Author

Arachnid commented Sep 1, 2016

Oh, so your proposed swarmHash returns the bzz URL?

No, it just returns the hash. The user enters "bzz://name.eth/foo/bar" or "bzzd://name.eth/"; the browser or other program does a swarmHash lookup on name.eth, and then treats the returned hash appropriately depending on the URI scheme the user supplied.

@Arachnid
Contributor Author

Arachnid commented Sep 1, 2016

Actually, I'm beginning to swing back the other way, in favor of naming it something generic like 'keccak' or 'sha256', because there are many other possible applications for storing a hash in ENS, such as attesting to the hash of your PGP key. IPFS support would then be possible either by adding methods for the other hash protocols IPFS supports, or by adding a 'CNAME'-type entry that resolves to an address string that IPFS understands.

@axic
Member

axic commented Sep 10, 2016

To summarise our discussion, perhaps a good idea is to go with multihash for the IPFS hashes and, as you've mentioned, sha256 and sha3/keccak.

It's important to note that sha3 is reserved in Solidity, and keccak or keccak256 might become a builtin function; therefore my suggestion is keccak_256. Your idea of pre/postfixing with hash also works: keccak256hash, sha256hash, etc.

@Arachnid
Contributor Author

Per discussion on Gitter, I've removed the content-hash resource type from the spec, so we can take our time on standardizing it independently of the main ENS spec.

@Arachnid
Contributor Author

Additional thought: should we add TTLs to the main registry contract? This would remove the need for individual record types to define them, and wouldn't entail much extra complexity. TTLs could default to 0, meaning no caching is allowed, so there's no storage overhead or extra gas cost unless the user chooses to set a TTL.
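A sketch of what that addition to the registry interface might look like (signatures assumed for illustration; uint64 is just one plausible width):

  // Per-node TTL, defaulting to 0 (no caching) unless explicitly set by the owner.
  function ttl(bytes32 node) constant returns (uint64);
  function setTTL(bytes32 node, uint64 ttl);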

@nagydani
Contributor

Can resolvers be just entries in lookup tables rather than contracts? The fundamental problem with contracts in this context is that they can return different values to different callers, thereby defeating the purpose of a consensus naming system in which the same name resolves to the same set of records for everyone. As a practical problem, external contracts whose behaviour depends on the content of certain records associated with certain names are rendered meaningless if the resolver can return different records to different callers.

@Arachnid
Contributor Author

Can resolvers be just entries in lookup tables rather than contracts? The fundamental problem with contracts in this context is that they can return different values to different callers, thereby defeating the purpose of a consensus naming system in which the same name resolves to the same set of records for everyone.

There are lots of valuable use-cases for flexible resolvers; think of load balancing and geographically determined IP records in DNS, for instance, or a resolver that allows anyone to update its mappings if they can prove something about the correctness of the new value.

At the same time, there's room for more constrained resolvers; for instance, one could deploy a general-purpose resolver that simply does table lookups, or even one that's entirely immutable, and the caller can check whether the resolver being used for a name is one known to enforce this limitation before resolving. You could even create a registrar that keeps ownership of the subdomains and only allows a given approved resolver to be provisioned on them.

@nagydani
Contributor

I was under the impression that mapping updates are the responsibility of the registrar, not the resolver; the resolver is the one that generates the mapping from information available to it. AFAIK, load balancing in DNS is typically done by multiple records corresponding to the same name. In general, AFAIK, zone files are not dynamically generated; they are clearly data and not code. But I might be wrong; I do not have extensive experience with DNS.

Having the caller check whether a certain resolver fulfills certain restrictions seems like a leaking abstraction to me. The end-user's concern is to have a certain name resolve to a certain address; with the current design of programmatic resolvers, that cannot be directly contracted for. Is the additional flexibility of programmatic resolvers worth losing that guarantee in what seems like the most common use case?

@Arachnid
Contributor Author

I was under the impression that mapping updates are the responsibility of the registrar, not the resolver; the resolver is the one that generates the mapping from information available to it.

The registrar's job is to update the registry's mapping from name hash to owner. So the process goes something like this:

  1. User buys a domain. The registrar updates the registry's owner record for the newly purchased domain to point at the buyer.
  2. (Optional) The user deploys a resolver contract that operates the way they want, or uses an already-deployed public one.
  3. User updates the resolver record for their name in the registry, pointing it at the newly deployed resolver.
  4. User sets records in the resolver to resolve whatever resources they have.

After the initial setup, the user only needs to repeat step 4 for each update; there's no involvement from the registrar or registry any longer.
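Roughly, in terms of the registry interface from this ERC plus a hypothetical resolver exposing a setAddr setter (ens, node, myResolver and myWallet are assumed variables), steps 3 and 4 reduce to two calls:

  // Step 3: point the name at the chosen resolver (a registry call made by the name's owner).
  ens.setResolver(node, address(myResolver));
  // Step 4: set the record in the resolver; the registrar and registry are no longer involved.
  myResolver.setAddr(node, myWallet);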

AFAIK, load balancing in DNS is typically done by multiple records corresponding to the same name. In general, AFAIK, zone files are not dynamically generated; they are clearly data and not code. But I might be wrong; I do not have extensive experience with DNS.

Basic nameservers operate like this, but many are more sophisticated; for instance, if you do an A lookup on google.com, the IP addresses that come back will depend on where in the world you are.

Having the caller check whether a certain resolver fulfills certain restrictions seems like a leaking abstraction to me.

Well, then there's the alternative I mentioned - a registrar that restricts which resolvers may be deployed to subdomains you register through it.

The end-user's concern is to have a certain name resolve to a certain address; with the current design of programmatic resolvers, that cannot be directly contracted for. Is the additional flexibility of programmatic resolvers worth losing that guarantee in what seems like the most common use case?

Yes. The whole future-proofing of the system rests on the premise that resolvers can be improved and extended in the future; without it, we're stuck with whatever record types we can think up right now. With flexible resolver contracts, people can define new record types to resolve new resources in future, such as IPFS, IP addresses, cryptographic keys, and so forth, without requiring any changes to registrars or registry.

What's the actual threat you're concerned about here? For the most part, my expectation is that any nefariousness an owner can conduct with name resolution, they can do just as easily with a table-driven resolver as they can with a more sophisticated one.

@axic
Member

axic commented Sep 30, 2016

@Arachnid:

It's important to note that sha3 is reserved in Solidity, and keccak or keccak256 might become a builtin function; therefore my suggestion is keccak_256. Your idea of pre/postfixing with hash also works: keccak256hash, sha256hash, etc.

The decision is to use keccak256 in Solidity, so probably using keccak256hash is a good idea (or just resorting to the binary representation of multihash or the RFC).

@Arachnid
Contributor Author

Arachnid commented Nov 29, 2016

So, after discussion with @chriseth around #165, I want to suggest a change to the resolver specification:

Instead of supporting a has method, all resolvers must support the following method:

function supportsInterface(bytes4 interfaceID) constant returns (bool)

interfaceID is a 4-byte identifier generated by XORing the signature hashes of all functions contained in the interface. In the case of the addr profile, this would be just the signature hash of the addr function, 0x3b3b57de.

This is more general purpose than the current has method, and doesn't require central approval to add new resource types.

This would also require changing the semantics for records that are supported by the resolver but do not exist: they would return the default value (e.g. 0 for addr) instead of throwing.
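A minimal sketch of a resolver implementing the proposed discovery plus the revised semantics (0x3b3b57de is the 4-byte signature hash of addr(bytes32); since the addr profile contains a single function, the XOR is just that value):

  contract AddrResolverSketch {
      mapping (bytes32 => address) addrs;

      // Interface discovery: report support for the addr profile only.
      function supportsInterface(bytes4 interfaceID) constant returns (bool) {
          return interfaceID == 0x3b3b57de;
      }

      // Revised semantics: return the default value (0) for nonexistent records
      // instead of throwing.
      function addr(bytes32 node) constant returns (address) {
          return addrs[node];
      }
  }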

Thoughts?

@Arachnid
Contributor Author

After discussion with @chriseth and others on Gitter, I've updated the spec accordingly. I'll deploy a new resolver to Ropsten that supports both the old has discovery and the new supportsInterface discovery.

@Arachnid
Contributor Author

I've updated the wording of the spec slightly to explicitly prohibit ASCII characters that are invalid in names.

I think it's probably worth providing a canonical set of test vectors for names, too - sample names and the byte strings they should convert to - in order for implementers to check their implementations against the standard.
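For illustration only, here is a sketch of how such a vector would be derived under the namehash recursion described in this ERC, assuming keccak-256 (Solidity's sha3) as the hash; no concrete output values are asserted here:

  function namehashFooEth() constant returns (bytes32) {
      bytes32 node = 0x0;              // namehash('') is 32 zero bytes
      node = sha3(node, sha3('eth'));  // namehash('eth')
      node = sha3(node, sha3('foo'));  // namehash('foo.eth'): labels are combined right to left
      return node;
  }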

@danfinlay
Contributor

danfinlay commented Mar 8, 2017

The sample test vectors in this post do not actually reflect the keccak_256 that was agreed on.

Can we get some up-to-date test vectors here? There is one sample for eth on the Read the Docs site.

EDIT: I'm unclear what was agreed on. The Read the Docs site says keccak_256, but the JS lib test suite is using the same thing listed here.

@cpacia

cpacia commented Mar 12, 2017

I'm a little late to comment, but would it make sense to define a "valid" name as the state at chaintip-300 (or something similar)?

If you check the state of a recently updated name at the tip, it could easily be forked off in a reorg. It's probably even more important to define a depth for light clients, as they don't validate anything.

@maurelian
Contributor

@cpacia I think you make a good point. A good feature for clients to implement might be warning a user when a name has recently changed ownership or changed the address it resolves to.

I'm also thinking now about the TTL property that can be set on a node in the registry. An owner could set the TTL quite high, then sell the name, hoping that clients would keep the old address cached and continue to direct transactions to them. Some agreement on what is a safe TTL would be helpful for implementers.

@Arachnid
Contributor Author

I've opened a PR for this: #629

@veox
Contributor

veox commented May 15, 2017

PR #629 has been merged: EIP 137.

@Arachnid
Contributor Author

@FlySwatter Sorry, I just saw your comment. Why do you think the test vectors don't match? They seem fine to me.

@cpacia I think that's out of scope for this, since your concern applies to all contract interactions.

@maurelian Good point; we should specify a maximum TTL clients should observe, and applications that exchange domains should warn users if the TTL is high.


@PrakharS12

I want to integrate ENS domain purchase functionality into my application. Please suggest APIs or a JavaScript SDK.
