
From: gnunet
Subject: [GNUnet-SVN] [taler-docs] branch master updated (41e0280 -> 8bfd85f)
Date: Fri, 27 Sep 2019 00:55:08 +0200

This is an automated email from the git hooks/post-receive script.

dold pushed a change to branch master
in repository docs.

    from 41e0280  document return codes
     new c8bbc6c  ebics order type xrefs
     new 9d15a77  sphinx extension: ebics domain
     new c3d5555  sphinx extension: typescript domain
     new a1ce922  replace old extension with new one
     new 8bfd85f  libeufin: ebics/nots

The 5 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _exts/ebicsdomain.py      | 226 +++++++++++++++++++
 _exts/tslex.py            |  88 --------
 _exts/tsref.py            | 241 --------------------
 _exts/typescriptdomain.py | 545 ++++++++++++++++++++++++++++++++++++++++++++++
 conf.py                   |   5 +-
 libeufin/api-sandbox.rst  |   9 +-
 libeufin/ebics.rst        | 212 ++++++++++++------
 7 files changed, 928 insertions(+), 398 deletions(-)
 create mode 100644 _exts/ebicsdomain.py
 delete mode 100644 _exts/tslex.py
 delete mode 100644 _exts/tsref.py
 create mode 100644 _exts/typescriptdomain.py

diff --git a/_exts/ebicsdomain.py b/_exts/ebicsdomain.py
new file mode 100644
index 0000000..70b15c2
--- /dev/null
+++ b/_exts/ebicsdomain.py
@@ -0,0 +1,226 @@
+"""
+EBICS documentation domain.
+
+:copyright: Copyright 2019 by Taler Systems SA
+:license: LGPLv3+
+:author: Florian Dold
+"""
+
+import re
+
+from docutils import nodes
+from typing import List, Optional, Iterable, Dict, Tuple
+from typing import cast
+
+from docutils import nodes
+from docutils.nodes import Element, Node
+from docutils.statemachine import StringList
+
+from sphinx import addnodes
+from sphinx.roles import XRefRole
+from sphinx.domains import Domain, ObjType, Index
+from sphinx.directives import directives
+from sphinx.util.docutils import SphinxDirective
+from sphinx.util.nodes import make_refnode
+from sphinx.util import logging
+
+logger = logging.getLogger(__name__)
+
+def make_glossary_term(env: "BuildEnvironment", textnodes: Iterable[Node], index_key: str,
+                       source: str, lineno: int, new_id: str = None) -> nodes.term:
+    # get a text-only representation of the term and register it
+    # as a cross-reference target
+    term = nodes.term('', '', *textnodes)
+    term.source = source
+    term.line = lineno
+
+    gloss_entries = env.temp_data.setdefault('gloss_entries', set())
+    termtext = term.astext()
+    if new_id is None:
+        new_id = nodes.make_id('ebics-order-' + termtext.lower())
+        if new_id == 'ebics-order':
+            # the term is not good for node_id.  Generate it by sequence number instead.
+            new_id = 'ebics-order-%d' % env.new_serialno('ebics')
+    while new_id in gloss_entries:
+        new_id = 'ebics-order-%d' % env.new_serialno('ebics')
+    gloss_entries.add(new_id)
+
+    ebics = env.get_domain('ebics')
+    ebics.add_object('order', termtext.lower(), env.docname, new_id)
+
+    term['ids'].append(new_id)
+    term['names'].append(new_id)
+
+    return term
+
+
+def split_term_classifiers(line: str) -> List[Optional[str]]:
+    # split line into a term and classifiers. if no classifier, None is used..
+    parts = re.split(' +: +', line) + [None]
+    return parts
+
+
+class EbicsOrders(SphinxDirective):
+    has_content = True
+    required_arguments = 0
+    optional_arguments = 0
+    final_argument_whitespace = False
+    option_spec = {
+        'sorted': directives.flag,
+    }
+
+    def run(self):
+        node = addnodes.glossary()
+        node.document = self.state.document
+
+        # This directive implements a custom format of the reST definition list
+        # that allows multiple lines of terms before the definition.  This is
+        # easy to parse since we know that the contents of the glossary *must
+        # be* a definition list.
+
+        # first, collect single entries
+        entries = []  # type: List[Tuple[List[Tuple[str, str, int]], StringList]]
+        in_definition = True
+        in_comment = False
+        was_empty = True
+        messages = []  # type: List[nodes.Node]
+        for line, (source, lineno) in zip(self.content, self.content.items):
+            # empty line -> add to last definition
+            if not line:
+                if in_definition and entries:
+                    entries[-1][1].append('', source, lineno)
+                was_empty = True
+                continue
+            # unindented line -> a term
+            if line and not line[0].isspace():
+                # enable comments
+                if line.startswith('.. '):
+                    in_comment = True
+                    continue
+                else:
+                    in_comment = False
+
+                # first term of definition
+                if in_definition:
+                    if not was_empty:
+                        messages.append(self.state.reporter.warning(
+                            _('glossary term must be preceded by empty line'),
+                            source=source, line=lineno))
+                    entries.append(([(line, source, lineno)], StringList()))
+                    in_definition = False
+                # second term and following
+                else:
+                    if was_empty:
+                        messages.append(self.state.reporter.warning(
+                            _('glossary terms must not be separated by empty lines'),
+                            source=source, line=lineno))
+                    if entries:
+                        entries[-1][0].append((line, source, lineno))
+                    else:
+                        messages.append(self.state.reporter.warning(
+                            _('glossary seems to be misformatted, check indentation'),
+                            source=source, line=lineno))
+            elif in_comment:
+                pass
+            else:
+                if not in_definition:
+                    # first line of definition, determines indentation
+                    in_definition = True
+                    indent_len = len(line) - len(line.lstrip())
+                if entries:
+                    entries[-1][1].append(line[indent_len:], source, lineno)
+                else:
+                    messages.append(self.state.reporter.warning(
+                        _('glossary seems to be misformatted, check indentation'),
+                        source=source, line=lineno))
+            was_empty = False
+
+        # now, parse all the entries into a big definition list
+        items = []
+        for terms, definition in entries:
+            termtexts = []          # type: List[str]
+            termnodes = []          # type: List[nodes.Node]
+            system_messages = []    # type: List[nodes.Node]
+            for line, source, lineno in terms:
+                parts = split_term_classifiers(line)
+                # parse the term with inline markup
+                # classifiers (parts[1:]) will not be shown on doctree
+                textnodes, sysmsg = self.state.inline_text(parts[0], lineno)
+
+                # use first classifier as a index key
+                term = make_glossary_term(self.env, textnodes, parts[1], source, lineno)
+                term.rawsource = line
+                system_messages.extend(sysmsg)
+                termtexts.append(term.astext())
+                termnodes.append(term)
+
+            termnodes.extend(system_messages)
+
+            defnode = nodes.definition()
+            if definition:
+                self.state.nested_parse(definition, definition.items[0][1],
+                                        defnode)
+            termnodes.append(defnode)
+            items.append((termtexts,
+                          nodes.definition_list_item('', *termnodes)))
+
+        if 'sorted' in self.options:
+            items.sort(key=lambda x:
+                       unicodedata.normalize('NFD', x[0][0].lower()))
+
+        dlist = nodes.definition_list()
+        dlist['classes'].append('glossary')
+        dlist.extend(item[1] for item in items)
+        node += dlist
+        return messages + [node]
+
+
+class EbicsDomain(Domain):
+    """Ebics domain."""
+
+    name = 'ebics'
+    label = 'EBICS'
+
+    object_types = {
+        'order': ObjType('order', 'ebics'),
+    }
+
+    directives = {
+        'orders': EbicsOrders,
+    }
+
+    roles = {
+        'order': XRefRole(lowercase=True, warn_dangling=True, innernodeclass=nodes.inline),
+    }
+
+    dangling_warnings = {
+        'order': 'undefined EBICS order type: %(target)s',
+    }
+
+    @property
+    def objects(self) -> Dict[Tuple[str, str], Tuple[str, str]]:
+        return self.data.setdefault('objects', {})  # (objtype, name) -> docname, labelid
+
+    def clear_doc(self, docname):
+        for key, (fn, _l) in list(self.objects.items()):
+            if fn == docname:
+                del self.objects[key]
+
+    def resolve_xref(self, env, fromdocname, builder, typ, target,
+                     node, contnode):
+        try:
+            info = self.objects[(str(typ), str(target))]
+        except KeyError:
+            return None
+        else:
+            anchor = "ebics-order-{}".format(str(target))
+            title = typ.upper() + ' ' + target
+            return make_refnode(builder, fromdocname, info[0], anchor,
+                                contnode, title)
+
+    def add_object(self, objtype: str, name: str, docname: str, labelid: str) -> None:
+        self.objects[objtype, name] = (docname, labelid)
+
+
+def setup(app):
+    app.add_domain(EbicsDomain)
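
A side note on the mechanics (not part of the commit): make_glossary_term()
registers each order type lowercased in the domain's object table, and
resolve_xref() later rebuilds the anchor from the target name, so the two
naming schemes must stay in sync.  A minimal standalone Python sketch, with a
hypothetical docname:

    # Sketch of EbicsDomain's object table and anchor scheme.
    # "libeufin/ebics" is a made-up docname for illustration only.
    objects = {}  # (objtype, name) -> (docname, labelid)

    def add_object(objtype, name, docname, labelid):
        objects[objtype, name] = (docname, labelid)

    # make_glossary_term() registers the term lowercased:
    add_object('order', 'btd', 'libeufin/ebics', 'ebics-order-btd')

    # resolve_xref() looks the (lowercased) target up under the same key
    # and derives the anchor from the target name:
    target = 'btd'
    docname, labelid = objects[('order', target)]
    assert "ebics-order-{}".format(target) == labelid
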
diff --git a/_exts/tslex.py b/_exts/tslex.py
deleted file mode 100644
index 2be6f29..0000000
--- a/_exts/tslex.py
+++ /dev/null
@@ -1,88 +0,0 @@
-from pygments.token import *
-from pygments.lexer import RegexLexer, ExtendedRegexLexer, bygroups, using, \
-     include, this
-import re
-
-class BetterTypeScriptLexer(RegexLexer):
-    """
-    For `TypeScript <https://www.typescriptlang.org/>`_ source code.
-    """
-
-    name = 'TypeScript'
-    aliases = ['ts']
-    filenames = ['*.ts']
-    mimetypes = ['text/x-typescript']
-
-    flags = re.DOTALL
-    tokens = {
-        'commentsandwhitespace': [
-            (r'\s+', Text),
-            (r'<!--', Comment),
-            (r'//.*?\n', Comment.Single),
-            (r'/\*.*?\*/', Comment.Multiline)
-        ],
-        'slashstartsregex': [
-            include('commentsandwhitespace'),
-            (r'/(\\.|[^[/\\\n]|\[(\\.|[^\]\\\n])*])+/'
-             r'([gim]+\b|\B)', String.Regex, '#pop'),
-            (r'(?=/)', Text, ('#pop', 'badregex')),
-            (r'', Text, '#pop')
-        ],
-        'badregex': [
-            (r'\n', Text, '#pop')
-        ],
-        'typeexp': [
-            (r'[a-zA-Z]+', Keyword.Type),
-            (r'\s+', Text),
-            (r'[|]', Text),
-            (r'\n', Text, "#pop"),
-            (r';', Text, "#pop"),
-            (r'', Text, "#pop"),
-        ],
-        'root': [
-            (r'^(?=\s|/|<!--)', Text, 'slashstartsregex'),
-            include('commentsandwhitespace'),
-            (r'\+\+|--|~|&&|\?|:|\|\||\\(?=\n)|'
-             r'(<<|>>>?|==?|!=?|[-<>+*%&\|\^/])=?', Operator, 'slashstartsregex'),
-            (r'[{(\[;,]', Punctuation, 'slashstartsregex'),
-            (r'[})\].]', Punctuation),
-            (r'(for|in|while|do|break|return|continue|switch|case|default|if|else|'
-             r'throw|try|catch|finally|new|delete|typeof|instanceof|void|'
-             r'this)\b', Keyword, 'slashstartsregex'),
-            (r'(var|let|const|with|function)\b', Keyword.Declaration, 'slashstartsregex'),
-            (r'(abstract|boolean|byte|char|class|const|debugger|double|enum|export|'
-             r'extends|final|float|goto|implements|import|int|interface|long|native|'
-             r'package|private|protected|public|short|static|super|synchronized|throws|'
-             r'transient|volatile)\b', Keyword.Reserved),
-            (r'(true|false|null|NaN|Infinity|undefined)\b', Keyword.Constant),
-            (r'(Array|Boolean|Date|Error|Function|Math|netscape|'
-             r'Number|Object|Packages|RegExp|String|sun|decodeURI|'
-             r'decodeURIComponent|encodeURI|encodeURIComponent|'
-             r'Error|eval|isFinite|isNaN|parseFloat|parseInt|document|this|'
-             r'window)\b', Name.Builtin),
-            # Match stuff like: module name {...}
-            (r'\b(module)(\s*)(\s*[a-zA-Z0-9_?.$][\w?.$]*)(\s*)',
-             bygroups(Keyword.Reserved, Text, Name.Other, Text), 'slashstartsregex'),
-            # Match variable type keywords
-            (r'\b(string|bool|number)\b', Keyword.Type),
-            # Match stuff like: constructor
-            (r'\b(constructor|declare|interface|as|AS)\b', Keyword.Reserved),
-            # Match stuff like: super(argument, list)
-            (r'(super)(\s*)\(([a-zA-Z0-9,_?.$\s]+\s*)\)',
-             bygroups(Keyword.Reserved, Text), 'slashstartsregex'),
-            # Match stuff like: function() {...}
-            (r'([a-zA-Z_?.$][\w?.$]*)\(\) \{', Name.Other, 'slashstartsregex'),
-            # Match stuff like: (function: return type)
-            (r'([a-zA-Z0-9_?.$][\w?.$]*)(\s*:\s*)([a-zA-Z0-9_?.$][\w?.$]*)',
-             bygroups(Name.Other, Text, Keyword.Type)),
-            # Match stuff like: type Foo = Bar | Baz
-            (r'\b(type)(\s*)([a-zA-Z0-9_?.$]+)(\s*)(=)(\s*)',
-             bygroups(Keyword.Reserved, Text, Name.Other, Text, Operator, Text), 'typeexp'),
-            (r'[$a-zA-Z_][a-zA-Z0-9_]*', Name.Other),
-            (r'[0-9][0-9]*\.[0-9]+([eE][0-9]+)?[fd]?', Number.Float),
-            (r'0x[0-9a-fA-F]+', Number.Hex),
-            (r'[0-9]+', Number.Integer),
-            (r'"(\\\\|\\"|[^"])*"', String.Double),
-            (r"'(\\\\|\\'|[^'])*'", String.Single),
-        ]
-    }
diff --git a/_exts/tsref.py b/_exts/tsref.py
deleted file mode 100644
index 980b9a5..0000000
--- a/_exts/tsref.py
+++ /dev/null
@@ -1,241 +0,0 @@
-"""
-  This file is part of GNU TALER.
-  Copyright (C) 2014, 2015 GNUnet e.V. and INRIA
-  TALER is free software; you can redistribute it and/or modify it under the
-  terms of the GNU Lesser General Public License as published by the Free Software
-  Foundation; either version 2.1, or (at your option) any later version.
-  TALER is distributed in the hope that it will be useful, but WITHOUT ANY
-  WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
-  A PARTICULAR PURPOSE.  See the GNU Lesser General Public License for more details.
-  You should have received a copy of the GNU Lesser General Public License along with
-  TALER; see the file COPYING.  If not, see <http://www.gnu.org/licenses/>
-
-  @author Florian Dold
-"""
-
-"""
-This extension adds a new lexer "tsref" for TypeScript, which
-allows reST-style links inside comments (`LinkName`_),
-and semi-automatically adds links to the definition of types.
-
-For type TYPE, a reference to tsref-type-TYPE is added.
-
-Known bugs and limitations:
- - The way the extension works right now interferes wiht
-   Sphinx's caching, the build directory should be cleared
-   before every build.
-"""
-
-
-from pygments.util import get_bool_opt
-from pygments.token import Name, Comment, Token, _TokenType
-from pygments.filter import Filter
-from sphinx.highlighting import PygmentsBridge
-from sphinx.builders.html import StandaloneHTMLBuilder
-from sphinx.pygments_styles import SphinxStyle
-from pygments.formatters import HtmlFormatter
-from docutils import nodes
-from docutils.nodes import make_id
-from sphinx.util import logging
-import re
-import sys
-
-
-logger = logging.getLogger(__name__)
-
-_escape_html_table = {
-    ord('&'): u'&amp;',
-    ord('<'): u'&lt;',
-    ord('>'): u'&gt;',
-    ord('"'): u'&quot;',
-    ord("'"): u'&#39;',
-}
-
-
-class LinkingHtmlFormatter(HtmlFormatter):
-    def __init__(self, **kwargs):
-        super(LinkingHtmlFormatter, self).__init__(**kwargs)
-        self._builder = kwargs['_builder']
-
-    def _fmt(self, value, tok):
-        cls = self._get_css_class(tok)
-        href = tok_getprop(tok, "href")
-        caption = tok_getprop(tok, "caption")
-        content = caption if caption is not None else value
-        if href:
-            value = '<a style="color:inherit;text-decoration:underline" href="%s">%s</a>' % (href, content)
-        if cls is None or cls == "":
-            return value
-        return '<span class="%s">%s</span>' % (cls, value)
-
-    def _format_lines(self, tokensource):
-        """
-        Just format the tokens, without any wrapping tags.
-        Yield individual lines.
-        """
-        lsep = self.lineseparator
-        escape_table = _escape_html_table
-
-        line = ''
-        for ttype, value in tokensource:
-            link = get_annotation(ttype, "link")
-
-            parts = value.translate(escape_table).split('\n')
-
-            if len(parts) == 0:
-                # empty token, usually should not happen
-                pass
-            elif len(parts) == 1:
-                # no newline before or after token
-                line += self._fmt(parts[0], ttype)
-            else:
-                line += self._fmt(parts[0], ttype)
-                yield 1, line + lsep
-                for part in parts[1:-1]:
-                    yield 1, self._fmt(part, ttype) + lsep
-                line = self._fmt(parts[-1], ttype)
-
-        if line:
-            yield 1, line + lsep
-
-
-class MyPygmentsBridge(PygmentsBridge):
-    def __init__(self, builder, trim_doctest_flags):
-        self.dest = "html"
-        self.trim_doctest_flags = trim_doctest_flags
-        self.formatter_args = {'style': SphinxStyle, '_builder': builder}
-        self.formatter = LinkingHtmlFormatter
-
-
-class MyHtmlBuilder(StandaloneHTMLBuilder):
-    name = "html-linked"
-    def init_highlighter(self):
-        if self.config.pygments_style is not None:
-            style = self.config.pygments_style
-        elif self.theme:
-            style = self.theme.get_confstr('theme', 'pygments_style', 'none')
-        else:
-            style = 'sphinx'
-        self.highlighter = MyPygmentsBridge(self, self.config.trim_doctest_flags)
-
-    def write_doc(self, docname, doctree):
-        self._current_docname = docname
-        super(MyHtmlBuilder, self).write_doc(docname, doctree)
-
-def get_annotation(tok, key):
-    if not hasattr(tok, "kv"):
-        return None
-    return tok.kv.get(key)
-
-
-def copy_token(tok):
-    new_tok = _TokenType(tok)
-    # This part is very fragile against API changes ...
-    new_tok.subtypes = set(tok.subtypes)
-    new_tok.parent = tok.parent
-    return new_tok
-
-
-def tok_setprop(tok, key, value):
-    tokid = id(tok)
-    e = token_props.get(tokid)
-    if e is None:
-        e = token_props[tokid] = (tok, {})
-    _, kv = e
-    kv[key] = value
-
-
-def tok_getprop(tok, key):
-    tokid = id(tok)
-    e = token_props.get(tokid)
-    if e is None:
-        return None
-    _, kv = e
-    return kv.get(key)
-
-
-link_reg = re.compile(r"`([^`<]+)\s*(?:<([^>]+)>)?\s*`_")
-
-# Map from token id to props.
-# Properties can't be added to tokens
-# since they derive from Python's tuple.
-token_props = {}
-
-
-class LinkFilter(Filter):
-    def __init__(self, app, **options):
-        self.app = app
-        Filter.__init__(self, **options)
-
-    def filter(self, lexer, stream):
-        id_to_doc = self.app.env.domaindata.get("_tsref", {})
-        for ttype, value in stream:
-            if ttype in Token.Keyword.Type:
-                defname = make_id('tsref-type-' + value);
-                t = copy_token(ttype)
-                if defname in id_to_doc:
-                    if hasattr(self.app.builder, "_current_docname"):
-                        current_docname = self.app.builder._current_docname
-                    else:
-                        current_docname = "(unknown-doc)"
-                    docname = id_to_doc[defname]
-                    uri = self.app.builder.get_relative_uri(current_docname, docname)
-                    href = uri + "#" + defname
-                    tok_setprop(t, "href", href)
-
-                yield t, value
-            elif ttype in Token.Comment:
-                last = 0
-                for m in re.finditer(link_reg, value):
-                    pre = value[last:m.start()]
-                    if pre:
-                        yield ttype, pre
-                    t = copy_token(ttype)
-                    x1, x2 = m.groups()
-                    if x2 is None:
-                        caption = x1.strip()
-                        id = make_id(x1)
-                    else:
-                        caption = x1.strip()
-                        id = make_id(x2)
-                    if id in id_to_doc:
-                        docname = id_to_doc[id]
-                        href = self.app.builder.get_target_uri(docname) + "#" + id
-                        tok_setprop(t, "href", href)
-                        tok_setprop(t, "caption", caption)
-                    else:
-                        logger.warning("unresolved link target in comment: " + 
id)
-                    yield t, m.group(1)
-                    last = m.end()
-                post = value[last:]
-                if post:
-                    yield ttype, post
-            else:
-                yield ttype, value
-
-
-
-def remember_targets(app, doctree):
-    docname = app.env.docname
-    id_to_doc = app.env.domaindata.get("_tsref", None)
-    if id_to_doc is None:
-        id_to_doc = app.env.domaindata["_tsref"] = {}
-    for node in doctree.traverse():
-        if not isinstance(node, nodes.Element):
-            continue
-        ids = node.get("ids")
-        if ids:
-            for id in ids:
-                id_to_doc[id] = docname
-
-
-def setup(app): 
-    from sphinx.highlighting import lexers
-    from pygments.token import Name
-    from pygments.filters import NameHighlightFilter
-    from tslex import BetterTypeScriptLexer
-    lexer = BetterTypeScriptLexer()
-    lexer.add_filter(LinkFilter(app))
-    app.add_lexer('tsref', lexer)
-    app.add_builder(MyHtmlBuilder)
-    app.connect("doctree-read", remember_targets)
diff --git a/_exts/typescriptdomain.py b/_exts/typescriptdomain.py
new file mode 100644
index 0000000..73a5ada
--- /dev/null
+++ b/_exts/typescriptdomain.py
@@ -0,0 +1,545 @@
+"""
+TypeScript domain.
+
+:copyright: Copyright 2019 by Taler Systems SA
+:license: LGPLv3+
+:author: Florian Dold
+"""
+
+import re
+
+from docutils import nodes
+from typing import List, Optional, Iterable, Dict, Tuple
+from typing import cast
+
+from pygments.lexers import get_lexer_by_name
+from pygments.filter import Filter
+from pygments.token import Literal, Text, Operator, Keyword, Name, Number
+from pygments.token import Comment, Token, _TokenType
+from pygments.token import *
+from pygments.lexer import RegexLexer,  bygroups, include
+from pygments.formatters import HtmlFormatter
+
+from docutils import nodes
+from docutils.nodes import Element, Node
+
+from sphinx.roles import XRefRole
+from sphinx.domains import Domain, ObjType, Index
+from sphinx.directives import directives
+from sphinx.util.docutils import SphinxDirective
+from sphinx.util.nodes import make_refnode
+from sphinx.util import logging
+from sphinx.highlighting import PygmentsBridge
+from sphinx.builders.html import StandaloneHTMLBuilder
+from sphinx.pygments_styles import SphinxStyle
+
+logger = logging.getLogger(__name__)
+
+
+class TypeScriptDefinition(SphinxDirective):
+    """
+    Directive for a code block with special highlighting or line numbering
+    settings.
+    """
+
+    has_content = True
+    required_arguments = 1
+    optional_arguments = 0
+    final_argument_whitespace = False
+    option_spec = {
+        'force': directives.flag,
+        'linenos': directives.flag,
+        'dedent': int,
+        'lineno-start': int,
+        'emphasize-lines': directives.unchanged_required,
+        'caption': directives.unchanged_required,
+        'class': directives.class_option,
+    }
+
+    def run(self) -> List[Node]:
+        document = self.state.document
+        code = '\n'.join(self.content)
+        location = self.state_machine.get_source_and_line(self.lineno)
+
+        linespec = self.options.get('emphasize-lines')
+        if linespec:
+            try:
+                nlines = len(self.content)
+                hl_lines = parselinenos(linespec, nlines)
+                if any(i >= nlines for i in hl_lines):
+                    logger.warning(
+                        __('line number spec is out of range(1-%d): %r') %
+                        (nlines, self.options['emphasize-lines']),
+                        location=location
+                    )
+
+                hl_lines = [x + 1 for x in hl_lines if x < nlines]
+            except ValueError as err:
+                return [document.reporter.warning(err, line=self.lineno)]
+        else:
+            hl_lines = None
+
+        if 'dedent' in self.options:
+            location = self.state_machine.get_source_and_line(self.lineno)
+            lines = code.split('\n')
+            lines = dedent_lines(
+                lines, self.options['dedent'], location=location
+            )
+            code = '\n'.join(lines)
+
+        literal = nodes.literal_block(code, code)  # type: Element
+        if 'linenos' in self.options or 'lineno-start' in self.options:
+            literal['linenos'] = True
+        literal['classes'] += self.options.get('class', [])
+        literal['force'] = 'force' in self.options
+        literal['language'] = "tsref"
+        extra_args = literal['highlight_args'] = {}
+        if hl_lines is not None:
+            extra_args['hl_lines'] = hl_lines
+        if 'lineno-start' in self.options:
+            extra_args['linenostart'] = self.options['lineno-start']
+        self.set_source_info(literal)
+
+        caption = self.options.get('caption')
+        if caption:
+            try:
+                literal = container_wrapper(self, literal, caption)
+            except ValueError as exc:
+                return [document.reporter.warning(exc, line=self.lineno)]
+
+        tsid = "tsref-type-" + self.arguments[0]
+        literal['ids'].append(tsid)
+
+        tsname = self.arguments[0]
+        ts = self.env.get_domain('ts')
+        ts.add_object('type', tsname, self.env.docname, tsid)
+
+        return [literal]
+
+
+class TypeScriptDomain(Domain):
+    """TypeScript domain."""
+
+    name = 'ts'
+    label = 'TypeScript'
+
+    directives = {
+        'def': TypeScriptDefinition,
+    }
+
+    roles = {
+        'type':
+        XRefRole(
+            lowercase=False, warn_dangling=True, innernodeclass=nodes.inline
+        ),
+    }
+
+    dangling_warnings = {
+        'type': 'undefined TypeScript type: %(target)s',
+    }
+
+    def resolve_xref(
+        self, env, fromdocname, builder, typ, target, node, contnode
+    ):
+        try:
+            info = self.objects[(str(typ), str(target))]
+        except KeyError:
+            logger.warn("type {}/{} not found".format(typ, target))
+            return None
+        else:
+            anchor = "tsref-type-{}".format(str(target))
+            title = typ.upper() + ' ' + target
+            return make_refnode(
+                builder, fromdocname, info[0], anchor, contnode, title
+            )
+
+    def resolve_any_xref(
+        self, env, fromdocname, builder, target, node, contnode
+    ):
+        """Resolve the pending_xref *node* with the given *target*.
+
+        The reference comes from an "any" or similar role, which means that 
Sphinx
+        don't know the type.
+
+        For now sphinxcontrib-httpdomain doesn't resolve any xref nodes.
+
+        :return:
+           list of tuples ``('domain:role', newnode)``, where ``'domain:role'``
+           is the name of a role that could have created the same reference,
+        """
+        ret = []
+        try:
+            info = self.objects[("type", str(target))]
+        except KeyError:
+            pass
+        else:
+            anchor = "tsref-type-{}".format(str(target))
+            title = "TYPE" + ' ' + target
+            node = make_refnode(
+                builder, fromdocname, info[0], anchor, contnode, title
+            )
+            ret.append(("ts:type", node))
+        return ret
+
+    @property
+    def objects(self) -> Dict[Tuple[str, str], Tuple[str, str]]:
+        return self.data.setdefault(
+            'objects', {}
+        )  # (objtype, name) -> docname, labelid
+
+    def add_object(
+        self, objtype: str, name: str, docname: str, labelid: str
+    ) -> None:
+        self.objects[objtype, name] = (docname, labelid)
+
+
+class BetterTypeScriptLexer(RegexLexer):
+    """
+    For `TypeScript <https://www.typescriptlang.org/>`_ source code.
+    """
+
+    name = 'TypeScript'
+    aliases = ['ts']
+    filenames = ['*.ts']
+    mimetypes = ['text/x-typescript']
+
+    flags = re.DOTALL
+    tokens = {
+        'commentsandwhitespace': [(r'\s+', Text), (r'<!--', Comment),
+                                  (r'//.*?\n', Comment.Single),
+                                  (r'/\*.*?\*/', Comment.Multiline)],
+        'slashstartsregex': [
+            include('commentsandwhitespace'),
+            (
+                r'/(\\.|[^[/\\\n]|\[(\\.|[^\]\\\n])*])+/'
+                r'([gim]+\b|\B)', String.Regex, '#pop'
+            ), (r'(?=/)', Text, ('#pop', 'badregex')), (r'', Text, '#pop')
+        ],
+        'badregex': [(r'\n', Text, '#pop')],
+        'typeexp': [
+            (r'[a-zA-Z0-9_?.$]+', Keyword.Type),
+            (r'\s+', Text),
+            (r'[|]', Text),
+            (r'\n', Text, "#pop"),
+            (r';', Text, "#pop"),
+            (r'', Text, "#pop"),
+        ],
+        'root': [
+            (r'^(?=\s|/|<!--)', Text, 'slashstartsregex'),
+            include('commentsandwhitespace'),
+            (
+                r'\+\+|--|~|&&|\?|:|\|\||\\(?=\n)|'
+                r'(<<|>>>?|==?|!=?|[-<>+*%&\|\^/])=?', Operator,
+                'slashstartsregex'
+            ),
+            (r'[{(\[;,]', Punctuation, 'slashstartsregex'),
+            (r'[})\].]', Punctuation),
+            (
+                r'(for|in|while|do|break|return|continue|switch|case|default|if|else|'
+                r'throw|try|catch|finally|new|delete|typeof|instanceof|void|'
+                r'this)\b', Keyword, 'slashstartsregex'
+            ),
+            (
+                r'(var|let|const|with|function)\b', Keyword.Declaration,
+                'slashstartsregex'
+            ),
+            (
+                r'(abstract|boolean|byte|char|class|const|debugger|double|enum|export|'
+                r'extends|final|float|goto|implements|import|int|interface|long|native|'
+                r'package|private|protected|public|short|static|super|synchronized|throws|'
+                r'transient|volatile)\b', Keyword.Reserved
+            ),
+            (r'(true|false|null|NaN|Infinity|undefined)\b', Keyword.Constant),
+            (
+                r'(Array|Boolean|Date|Error|Function|Math|netscape|'
+                r'Number|Object|Packages|RegExp|String|sun|decodeURI|'
+                r'decodeURIComponent|encodeURI|encodeURIComponent|'
+                r'Error|eval|isFinite|isNaN|parseFloat|parseInt|document|this|'
+                r'window)\b', Name.Builtin
+            ),
+            # Match stuff like: module name {...}
+            (
+                r'\b(module)(\s*)(\s*[a-zA-Z0-9_?.$][\w?.$]*)(\s*)',
+                bygroups(Keyword.Reserved, Text, Name.Other,
+                         Text), 'slashstartsregex'
+            ),
+            # Match variable type keywords
+            (r'\b(string|bool|number)\b', Keyword.Type),
+            # Match stuff like: constructor
+            (r'\b(constructor|declare|interface|as|AS)\b', Keyword.Reserved),
+            # Match stuff like: super(argument, list)
+            (
+                r'(super)(\s*)\(([a-zA-Z0-9,_?.$\s]+\s*)\)',
+                bygroups(Keyword.Reserved, Text), 'slashstartsregex'
+            ),
+            # Match stuff like: function() {...}
+            (r'([a-zA-Z_?.$][\w?.$]*)\(\) \{', Name.Other, 'slashstartsregex'),
+            # Match stuff like: (function: return type)
+            (
+                r'([a-zA-Z0-9_?.$][\w?.$]*)(\s*:\s*)',
+                bygroups(Name.Other, Text), 'typeexp'
+            ),
+            # Match stuff like: type Foo = Bar | Baz
+            (
+                r'\b(type)(\s*)([a-zA-Z0-9_?.$]+)(\s*)(=)(\s*)',
+                bygroups(
+                    Keyword.Reserved, Text, Name.Other, Text, Operator, Text
+                ), 'typeexp'
+            ),
+            (r'[$a-zA-Z_][a-zA-Z0-9_]*', Name.Other),
+            (r'[0-9][0-9]*\.[0-9]+([eE][0-9]+)?[fd]?', Number.Float),
+            (r'0x[0-9a-fA-F]+', Number.Hex),
+            (r'[0-9]+', Number.Integer),
+            (r'"(\\\\|\\"|[^"])*"', String.Double),
+            (r"'(\\\\|\\'|[^'])*'", String.Single),
+        ]
+    }
+
+
+# Map from token id to props.
+# Properties can't be added to tokens
+# since they derive from Python's tuple.
+token_props = {}
+
+
+class LinkFilter(Filter):
+    def __init__(self, app, **options):
+        self.app = app
+        Filter.__init__(self, **options)
+
+    def _filter_one_literal(self, ttype, value):
+        last = 0
+        for m in re.finditer(literal_reg, value):
+            pre = value[last:m.start()]
+            if pre:
+                yield ttype, pre
+            t = copy_token(ttype)
+            tok_setprop(t, "is_literal", True)
+            yield t, m.group(1)
+            last = m.end()
+        post = value[last:]
+        if post:
+            yield ttype, post
+
+
+    def filter(self, lexer, stream):
+        for ttype, value in stream:
+            if ttype in Token.Keyword.Type:
+                t = copy_token(ttype)
+                tok_setprop(t, "xref", value.strip())
+                tok_setprop(t, "is_identifier", True)
+                yield t, value
+            elif ttype in Token.Comment:
+                last = 0
+                for m in re.finditer(link_reg, value):
+                    pre = value[last:m.start()]
+                    if pre:
+                        yield from self._filter_one_literal(ttype, pre)
+                    t = copy_token(ttype)
+                    x1, x2 = m.groups()
+                    x0 = m.group(0)
+                    if x2 is None:
+                        caption = x1.strip()
+                        xref = x1.strip()
+                    else:
+                        caption = x1.strip()
+                        xref = x2.strip()
+                    tok_setprop(t, "xref", xref)
+                    tok_setprop(t, "caption", caption)
+                    if x0.endswith("_"):
+                        tok_setprop(t, "trailing_underscore", True)
+                    yield t, m.group(1)
+                    last = m.end()
+                post = value[last:]
+                if post:
+                    yield from self._filter_one_literal(ttype, post)
+            else:
+                yield ttype, value
+
+
+_escape_html_table = {
+    ord('&'): u'&amp;',
+    ord('<'): u'&lt;',
+    ord('>'): u'&gt;',
+    ord('"'): u'&quot;',
+    ord("'"): u'&#39;',
+}
+
+
+class LinkingHtmlFormatter(HtmlFormatter):
+    def __init__(self, **kwargs):
+        super(LinkingHtmlFormatter, self).__init__(**kwargs)
+        self._builder = kwargs['_builder']
+        self._bridge = kwargs['_bridge']
+
+    def _get_value(self, value, tok):
+        xref = tok_getprop(tok, "xref")
+        caption = tok_getprop(tok, "caption")
+
+        if tok_getprop(tok, "is_literal"):
+            return '<span style="font-weight: bolder">%s</span>' % (value,)
+
+        if tok_getprop(tok, "trailing_underscore"):
+            logger.warn(
+                "{}:{}: code block contains xref to '{}' with unsupported 
trailing underscore"
+                .format(self._bridge.path, self._bridge.line, xref)
+            )
+
+        if tok_getprop(tok, "is_identifier"):
+            if xref.startswith('"'):
+                return value
+            if xref in ("number", "object", "string", "boolean", "any", 
"true", "false"):
+                return value
+
+        if xref is None:
+            return value
+        content = caption if caption is not None else value
+        ts = self._builder.env.get_domain('ts')
+        r1 = ts.objects.get(("type", xref), None)
+        if r1 is not None:
+            rel_uri = self._builder.get_relative_uri(self._bridge.docname, r1[0]) + "#" + r1[1]
+            return '<a style="color:inherit;text-decoration:underline" href="%s">%s</a>' % (
+                rel_uri, content
+            )
+
+        std = self._builder.env.get_domain('std')
+        r2 = std.labels.get(xref.lower(), None)
+        if r2 is not None:
+            rel_uri = self._builder.get_relative_uri(self._bridge.docname, r2[0]) + "#" + r2[1]
+            return '<a style="color:inherit;text-decoration:underline" href="%s">%s</a>' % (
+                rel_uri, content
+            )
+        r3 = std.anonlabels.get(xref.lower(), None)
+        if r3 is not None:
+            rel_uri = self._builder.get_relative_uri(self._bridge.docname, r3[0]) + "#" + r3[1]
+            return '<a style="color:inherit;text-decoration:underline" href="%s">%s</a>' % (
+                rel_uri, content
+            )
+
+        logger.warn("{}:{}: code block contains unresolved xref 
'{}'".format(self._bridge.path, self._bridge.line, xref))
+
+        return value
+
+
+    def _fmt(self, value, tok):
+        cls = self._get_css_class(tok)
+        value = self._get_value(value, tok)
+        if cls is None or cls == "":
+            return value
+        return '<span class="%s">%s</span>' % (cls, value)
+
+    def _format_lines(self, tokensource):
+        """
+        Just format the tokens, without any wrapping tags.
+        Yield individual lines.
+        """
+        lsep = self.lineseparator
+        escape_table = _escape_html_table
+
+        line = ''
+        for ttype, value in tokensource:
+            link = get_annotation(ttype, "link")
+
+            parts = value.translate(escape_table).split('\n')
+
+            if len(parts) == 0:
+                # empty token, usually should not happen
+                pass
+            elif len(parts) == 1:
+                # no newline before or after token
+                line += self._fmt(parts[0], ttype)
+            else:
+                line += self._fmt(parts[0], ttype)
+                yield 1, line + lsep
+                for part in parts[1:-1]:
+                    yield 1, self._fmt(part, ttype) + lsep
+                line = self._fmt(parts[-1], ttype)
+
+        if line:
+            yield 1, line + lsep
+
+
+class MyPygmentsBridge(PygmentsBridge):
+    def __init__(self, builder, trim_doctest_flags):
+        self.dest = "html"
+        self.trim_doctest_flags = trim_doctest_flags
+        self.formatter_args = {
+            'style': SphinxStyle,
+            '_builder': builder,
+            '_bridge': self
+        }
+        self.formatter = LinkingHtmlFormatter
+        self.builder = builder
+        self.path = None
+        self.line = None
+        self.docname = None
+
+    def highlight_block(
+        self, source, lang, opts=None, force=False, location=None, **kwargs
+    ):
+        docname, line = location
+        self.line = line
+        self.path = self.builder.env.doc2path(docname)
+        self.docname = docname
+        return super().highlight_block(
+            source, lang, opts, force, location, **kwargs
+        )
+
+
+class MyHtmlBuilder(StandaloneHTMLBuilder):
+    name = "html-linked"
+
+    def init_highlighter(self):
+        if self.config.pygments_style is not None:
+            style = self.config.pygments_style
+        elif self.theme:
+            style = self.theme.get_confstr('theme', 'pygments_style', 'none')
+        else:
+            style = 'sphinx'
+        self.highlighter = MyPygmentsBridge(
+            self, self.config.trim_doctest_flags
+        )
+
+
+def get_annotation(tok, key):
+    if not hasattr(tok, "kv"):
+        return None
+    return tok.kv.get(key)
+
+
+def copy_token(tok):
+    new_tok = _TokenType(tok)
+    # This part is very fragile against API changes ...
+    new_tok.subtypes = set(tok.subtypes)
+    new_tok.parent = tok.parent
+    return new_tok
+
+
+def tok_setprop(tok, key, value):
+    tokid = id(tok)
+    e = token_props.get(tokid)
+    if e is None:
+        e = token_props[tokid] = (tok, {})
+    _, kv = e
+    kv[key] = value
+
+
+def tok_getprop(tok, key):
+    tokid = id(tok)
+    e = token_props.get(tokid)
+    if e is None:
+        return None
+    _, kv = e
+    return kv.get(key)
+
+
+link_reg = re.compile(r"(?<!`)`([^`<]+)\s*(?:<([^>]+)>)?\s*`_?")
+literal_reg = re.compile(r"``([^`]+)``")
+
+
+def setup(app):
+    lexer = BetterTypeScriptLexer()
+    lexer.add_filter(LinkFilter(app))
+    app.add_lexer('tsref', lexer)
+    app.add_domain(TypeScriptDomain)
+    app.add_builder(MyHtmlBuilder)
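
For reference, a quick standalone check of the two regexes registered near the
bottom of this file; the sample comment and the type name "WireDetail" are
invented:

    import re

    link_reg = re.compile(r"(?<!`)`([^`<]+)\s*(?:<([^>]+)>)?\s*`_?")
    literal_reg = re.compile(r"``([^`]+)``")

    comment = "// See `WireDetail`_ and `the docs <wire-details>`_ for ``SIG``."

    for m in link_reg.finditer(comment):
        caption, target = m.group(1), m.group(2)
        # LinkFilter.filter() falls back to the caption when no <target>
        # is given, which is what (target or caption) mirrors here.
        print(caption.strip(), "->", (target or caption).strip())
    # WireDetail -> WireDetail
    # the docs -> wire-details

    print(literal_reg.findall(comment))  # ['SIG']
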
diff --git a/conf.py b/conf.py
index 779f372..4c81529 100644
--- a/conf.py
+++ b/conf.py
@@ -50,7 +50,8 @@ needs_sphinx = '1.3'
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
 # ones.
 extensions = [
-    'tsref',
+    'ebicsdomain',
+    'typescriptdomain',
     'taler_sphinx_theme',
     'sphinx.ext.todo',
     'sphinx.ext.imgmath',
@@ -102,7 +103,7 @@ exclude_patterns = ['_build', '_exts', 'cf', 'prebuilt']
 
 # The reST default role (used for this markup: `text`) to use for all
 # documents.
-#default_role = None
+default_role = "ts:type"
 
 # If true, '()' will be appended to :func: etc. cross-reference text.
 #add_function_parentheses = True
diff --git a/libeufin/api-sandbox.rst b/libeufin/api-sandbox.rst
index c020ccc..df0be59 100644
--- a/libeufin/api-sandbox.rst
+++ b/libeufin/api-sandbox.rst
@@ -51,14 +51,15 @@ HTTP API
 
   Get information about a customer.
 
-  
-  .. code-block:: tsref
+  .. ts:def:: CustomerInfo
 
     interface CustomerInfo {
       ebicsInfo?: CustomerEbicsInfo;
       finTsInfo?: CustomerFinTsInfo;
     }
 
+  .. ts:def:: CustomerEbicsInfo
+
     interface CustomerEbicsInfo {
       ebicsHostId: string;
       ebicsParterId: string;
@@ -68,6 +69,10 @@ HTTP API
      subscriberInitializationState: "NEW" | "PARTIALLY_INITIALIZED_INI" | "PARTIALLY_INITIALIZED_HIA" | "READY" | "INITIALIZED";
     }
 
+  .. ts:def:: CustomerFinTsInfo
+    
+    // TODO
+
 .. http:post:: /admin/customers/:id/ebics/keyletter
 
  Accept the information from the customer's ("virtual") INI-Letter and HIA-Letter
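
Together with the conf.py hunk above, this is how bare backtick references now
work: ``.. ts:def:: CustomerInfo`` registers the anchor
tsref-type-CustomerInfo, and since default_role is "ts:type", a plain
`CustomerInfo` in backticks resolves against it.  A tiny sketch of the anchor
scheme (not part of the commit):

    def ts_def_anchor(typename):
        # mirrors TypeScriptDefinition.run(): tsid = "tsref-type-" + name
        return "tsref-type-" + typename

    assert ts_def_anchor("CustomerInfo") == "tsref-type-CustomerInfo"
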
diff --git a/libeufin/ebics.rst b/libeufin/ebics.rst
index a92e4df..ebc6b8e 100644
--- a/libeufin/ebics.rst
+++ b/libeufin/ebics.rst
@@ -122,7 +122,7 @@ EBICS Glossary
 
   UNIFI
     UNIversal Financial Industry message scheme.  Sometimes used to refer to
-    :tern:`ISO 20022`.
+    :term:`ISO 20022`.
 
   Segmentation
    EBICS implements its own protocol-level segmentation of business-related messages.
@@ -166,92 +166,123 @@ bank-technical order types.  This convention isn't always followed consistently
 Relevant Order Types
 --------------------
 
-BTD
-  **Only EBICS3.0+**. Business Transaction Format Download.
-  Administrative order type to download a file, described in more detail by the BTF structure
+.. ebics:orders::
 
-BTU
-  **Only EBICS3.0+**. Business Transaction Format Upload.
-  Administrative order type to upload a file, described in more detail by the BTF structure
+  BTD
+    **Only EBICS3.0+**. Business Transaction Format Download.
+    Administrative order type to download a file, described in more detail by the BTF structure
 
-C52
-  **Before EBICS 3.0**.  Download bank-to-customer account report.
+  BTU
+    **Only EBICS3.0+**. Business Transaction Format Upload.
+    Administrative order type to upload a file, described in more detail by the BTF structure
 
-C53
-  **Before EBICS 3.0**.  Download bank-to-customer statement report.
+  C52
+    **Before EBICS 3.0**.  Download bank-to-customer account report (intra-day information).
 
-FUL
-  **Before EBICS 3.0, France**.  File Upload.  Mainly used by France-style EBICS.
+  C53
+    **Before EBICS 3.0**.  Download bank-to-customer statement report (prior day bank statement).
 
-FDL
-  **Before EBICS 3.0, France**.  File Download.  Mainly used by France-style EBICS.
+  CRZ
+    Type: Download.
 
-HIA
-  Transmission of the subscriber keys for (1) identification and authentication and (2)
-  encryption within the framework of subscriber initialisation.
+    Fetch payment status report (pain.002)
 
-HPB
-  Query the three RSA keys of the financial institute.
+  FUL
+    **Before EBICS 3.0, France**.  File Upload.  Mainly used by France-style EBICS.
 
-HPD
-  Host Parameter Data.  Used to query the capabilities of the financial institution.
+  FDL
+    **Before EBICS 3.0, France**.  File Download.  Mainly used by France-style EBICS.
 
-INI
-  Transmission of the subscriber keys for bank-technical electronic signatures.
+  HIA
+    Transmission of the subscriber keys for (1) identification and authentication and (2)
+    encryption within the framework of subscriber initialisation.
 
-HAC
-  Customer acknowledgement.  Allows downloading a detailed "log" of the activities
-  done via EBICS, in the pain.002 XML format.
+  HPB
+    Query the three RSA keys of the financial institute.
 
-HCS
-  Change keys without having to send a new INI/HIA letter.
+  HPD
+    Host Parameter Data.  Used to query the capabilities of the financial institution.
 
-SPR
-  Suspend a subscriber.  Used when a key compromise is suspected.
+  INI
+    Transmission of the subscriber keys for bank-technical electronic signatures.
 
-HCS
-  Change the subscribers keys (``K_SIG``, ``K_IA`` and ``K_ENC``).
+  HAC
+    Customer acknowledgement.  Allows downloading a detailed "log" of the activities
+    done via EBICS, in the pain.002 XML format.
+
+  HCS
+    Change keys without having to send a new INI/HIA letter.
+
+  SPR
+    Suspend a subscriber.  Used when a key compromise is suspected.
+
+  HCS
+    Change the subscriber's keys (``K_SIG``, ``K_IA`` and ``K_ENC``).
 
 Other Order Types
 -----------------
 
 The following order types are, for now, not relevant for LibEuFin:
 
-H3K
-  Send all three RSA key pairs for initialization at once, accompanied
-  by a CA certificate for the keys.  This is (as far as we know) used in France,
-  but not used by any German banks.  When initializing a subscriber with H3K,
-  no INI and HIA letters are required.
 
-HVE
-  Host Verification of Electronic Signature.  Used to submit an electronic signature separately
-  from a previously uploaded order.
+.. ebics:orders::
+
+  AZV
+    Type: Upload.
+
+    From German "Auslandszahlungsverkehr".  Used to submit
+    cross-border payments in a legacy format.
+
+  CDZ
+    Type: Download.
+
+    Download payment status report for direct debit.
+
+  H3K
+    Type: Upload.
+
+    Send all three RSA key pairs for initialization at once, accompanied
+    by a CA certificate for the keys.  This is (as far as we know) used in France,
+    but not used by any German banks.  When initializing a subscriber with H3K,
+    no INI and HIA letters are required.
 
-HVD
-  Retrieve VEU state.
+  HVE
+    Type: Download.
 
-HVU
-  Retrieve VEU overview.
+    Host Verification of Electronic Signature.  Used to submit an electronic signature separately
+    from a previously uploaded order.
 
-HVS
-  Cancel Previous Order (from German "Storno").  Used to submit an electronic 
signature separately
-  from a previously uploaded order.
+  HVD
+    Retrieve VEU state.
 
-HSA
-  Order to migrate from FTAM to EBICS.  **Removed in EBICS 3.0**.
+  HVU
+    Retrieve VEU overview.
 
-PUB
-  Change of the bank-technical key (``K_SIG``).
-  Superseeded by HSA.
+  HVS
+    Cancel Previous Order (from German "Storno").  Used to submit an 
electronic signature separately
+    from a previously uploaded order.
 
-HCA
-  Change the identification and authentication key as well as the encryption key (``K_IA`` and ``K_ENC``).
-  Superseeded by HCS.
+  HSA
+    Order to migrate from FTAM to EBICS.  **Removed in EBICS 3.0**.
 
-PTK
-  Download a human-readable protocol of operations done via EBICS.
-  Mandatory for German banks.  Superseeded by the machine-readable
-  HAC order type.
+  PUB
+    Type: Upload.
+
+    Change of the bank-technical key (``K_SIG``).
+    Superseded by HSA.
+
+  HCA
+    Type: Upload.
+
+    Change the identification and authentication key as well as the encryption key (``K_IA`` and ``K_ENC``).
+    Superseded by HCS.
+
+  PTK
+    Type: Download.
+
+    Download a human-readable protocol of operations done via EBICS.
+    Mandatory for German banks.  Superseded by the machine-readable
+    HAC order type.
 
 
 
@@ -262,6 +293,15 @@ ISO 20022
 ---------
 
 ISO 20022 is XML-based and defines message format for many finance-related activities.
+
+ISO 20022 messages are identified by a message identifier in the following format:
+
+  <business area> . <message identifier> . <variant> . <version>
+
+Some financial institutions (such as the Deutsche Kreditwirtschaft) may decide to use
+a subset of elements/attributes in a message; this is what the ``<variant>`` part is for.
+"Standard" ISO 20022 messages have variant ``001``.
+
 The most important message types for LibEuFin are:
 
 camt - Cash Management
@@ -270,6 +310,22 @@ camt - Cash Management
 pain - Payment Initiation
   Particularly pain.001 (CustomerCreditTransferInitiation) to initiate a payment and
   pain.002 (CustomerPaymentStatus) to get the status of a payment.
+
+
+SWIFT Proprietary
+=================
+
+SWIFT Proprietary messages are in a custom textual format.
+The relevant messages are MT940 and MT942.
+
+* MT942 contains *pre-advice*, in the sense that transactions in it might still
+  change or be reversed
+* MT940 contains the settled transactions by the end of the day
+
+SWIFT will eventually transition from MT messages to ISO20022,
+but some banks might still only give us account statements in the old
+SWIFT format.
+
   
 
 Key Management
@@ -288,13 +344,35 @@ One subscriber *may* use three different key pairs for these purposes.
 The identification and authentication key pair may be the same as the encryption key pair.
 The bank-technical key pair may not be used for any other purpose.
 
-MT940 vs MT942
-==============
 
-* MT942 contains *pre-advice*, in the sense that transactions in it might still change
-  or be reversed
-* M942 contains the settled transactions by the end of the day
+Real-time Transactions
+======================
+
+
+Bank Support
+============
+
+All German banks that are part of the Deutsche Kreditwirtschaft *must* support EBICS.
+
+The exact subset of EBICS order types must be agreed on contractually by the bank and customer.
+The following subsections list the message types that we know are supported by particular banks.
+
+GLS Bank
+--------
+
+According to publicly available `forms
+<https://www.gls-laden.de/media/pdf/f1/c6/f2/nderungsauftrag.pdf>`_, GLS Bank
+supports the following order types:
+
+ * :ebics:order:`PTK`
+ * :ebics:order:`CDZ`
+ * ... and mandatory administrative messages
+
+
+HypoVereinsbank
+---------------
 
+See `this document <https://www.hypovereinsbank.de/content/dam/hypovereinsbank/shared/pdf/Auftragsarten_Internet_Nov2018_181118.pdf>`_.
 
 Standards and Resources
 =======================
@@ -330,6 +408,10 @@ EBICS Compendium
 The `EBICS Compendium <https://www.ppi.de/en/payments/ebics/ebics-compendium/>`_ has some additional info on EBICS.
 It is published by a company that sells a proprietary EBICS server implementation.
 
+Others
+------
 
+* `<https://wiki.windata.de/index.php?title=EBICS-Auftragsarten>`_
+* `<https://www.gls-laden.de/media/pdf/f1/c6/f2/nderungsauftrag.pdf>`_
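
A small illustration of the ISO 20022 message identifier format described in
the ebics.rst hunk above; "pain.001.001.03" (a common
CustomerCreditTransferInitiation identifier) serves purely as an example:

    def parse_iso20022_id(msg_id):
        # <business area> . <message identifier> . <variant> . <version>
        business_area, message, variant, version = msg_id.split(".")
        return business_area, message, variant, version

    print(parse_iso20022_id("pain.001.001.03"))
    # ('pain', '001', '001', '03')
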
 
 

-- 
To stop receiving notification emails like this one, please contact
address@hidden.


