How to use the f90nml.tokenizer.Tokenizer class in f90nml

To help you get started, we’ve selected a few f90nml examples based on popular ways it is used in public projects.


From marshallward/f90nml, f90nml/parser.py (view on GitHub):
def _readstream(self, nml_file, nml_patch_in=None):
    """Parse an input stream containing a Fortran namelist."""
    nml_patch = nml_patch_in if nml_patch_in is not None else Namelist()

    tokenizer = Tokenizer()
    tokenizer.comment_tokens = self.comment_tokens
    f90lex = []
    for line in nml_file:
        toks = tokenizer.parse(line)
        while tokenizer.prior_delim:
            new_toks = tokenizer.parse(next(nml_file))

            # Skip empty lines
            if not new_toks:
                continue

            # The tokenizer always pre-tokenizes the whitespace (leftover
            # behaviour from Fortran source parsing) so this must be added
            # manually.
            if new_toks[0].isspace():
                toks[-1] += new_toks.pop(0)
            # ... (excerpt ends here; the method continues in parser.py)
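The while loop above handles strings that span multiple lines: tokenizer.prior_delim stays set while a quote opened on an earlier line remains unclosed, so the parser keeps pulling lines from the stream. Below is a condensed sketch of the same pattern, with a made-up namelist whose string value is split across two lines; unlike the real parser, this version simply extends the token list rather than merging the split string into a single token:

from f90nml.tokenizer import Tokenizer

# Hypothetical input: the quoted value of 'msg' continues onto the next line.
lines = iter([
    "&config\n",
    "  msg = 'hello\n",
    "world'\n",
    "/\n",
])

tokenizer = Tokenizer()
for line in lines:
    toks = tokenizer.parse(line)
    # prior_delim is truthy while an opened quote is still unclosed,
    # so keep consuming lines until the string terminates.
    while tokenizer.prior_delim:
        toks.extend(tokenizer.parse(next(lines)))
    print(toks)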
From marshallward/f90nml, f90nml/tokenizer.py (view on GitHub):
            # (Inside the token loop of Tokenizer.parse; earlier branches of
            # this if/elif chain are omitted from the excerpt.)
            elif self.char in self.comment_tokens or self.group_token is None:
                # Abort the iteration and build the comment token
                word = line[self.idx:-1]
                self.char = '\n'

            elif self.char in '"\'' or self.prior_delim:
                word = self.parse_string()

            elif self.char in Tokenizer.punctuation:
                word = self.char
                self.update_chars()

            else:
                while (not self.char.isspace()
                       and self.char not in Tokenizer.punctuation):
                    word += self.char
                    self.update_chars()

            tokens.append(word)

        return tokens
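The first branch shows why parser.py copies comment_tokens onto the tokenizer: any character in that collection starts a comment that runs to the end of the line. A small sketch, assuming we also want '#' treated as a comment character alongside Fortran's standard '!':

from f90nml.tokenizer import Tokenizer

tokenizer = Tokenizer()
# Extending the comment characters is an illustrative customization.
tokenizer.comment_tokens = '!#'

toks = tokenizer.parse("x = 1  # a trailing comment\n")
# The '#' and everything after it come back as a single comment token.
print(toks)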