How to use the pyperf.add_runs function in pyperf

To help you get started, we’ve selected a few pyperf.add_runs examples based on popular ways the function is used in public projects.

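pyperf.add_runs(filename, result) appends the runs of a pyperf.Benchmark or pyperf.BenchmarkSuite to the JSON file filename, creating the file if it does not exist yet. The sketch below is a minimal, hand-rolled illustration of that call: the file name bench.json, the benchmark name bench_example and the timing values are invented for the example, and in real code the benchmark would normally come from pyperf.Runner, as in the project snippets that follow.

import pyperf

# Build a Benchmark by hand from a single Run with made-up timing values
# (in practice pyperf.Runner produces the Benchmark for you).
run = pyperf.Run([0.021, 0.022, 0.020],
                 metadata={'name': 'bench_example'},
                 collect_metadata=False)
bench = pyperf.Benchmark([run])

# Append the runs to bench.json; the file is created if it is missing.
pyperf.add_runs('bench.json', bench)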

github python/pyperformance: pyperformance/cli_run.py
    executable = sys.executable
    if not os.path.isabs(executable):
        print("ERROR: \"%s\" is not an absolute path" % executable)
        sys.exit(1)
    bench_funcs, bench_groups, should_run = get_benchmarks_to_run(options)
    cmd_prefix = [executable]
    suite, errors = run_benchmarks(bench_funcs, should_run, cmd_prefix, options)

    if not suite:
        print("ERROR: No benchmark was run")
        sys.exit(1)

    if options.output:
        suite.dump(options.output)
    if options.append:
        pyperf.add_runs(options.append, suite)
    display_benchmark_suite(suite)

    if errors:
        print("%s benchmarks failed:" % len(errors))
        for name in errors:
            print("- %s" % name)
        print()
        sys.exit(1)
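
In this pyperformance command, --output writes the freshly collected suite to a new file with suite.dump(), while --append folds the same runs into an existing results file through pyperf.add_runs(), so repeated invocations can accumulate their runs in a single JSON file.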
github vstinner/pyperf: pyperf/_runner.py
            bench.dump(wfile)
        else:
            lines = format_benchmark(bench,
                                     checks=checks,
                                     metadata=args.metadata,
                                     dump=args.dump,
                                     stats=args.stats,
                                     hist=args.hist,
                                     show_name=self._show_name)
            for line in lines:
                print(line)

            sys.stdout.flush()

        if args.append:
            pyperf.add_runs(args.append, bench)

        if args.output:
            if self._worker_task >= 1:
                pyperf.add_runs(args.output, bench)
            else:
                bench.dump(args.output)
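
pyperf's own Runner applies the same split: when --append is given, every worker appends its runs with pyperf.add_runs(), and for -o/--output the first worker task creates the file with bench.dump() while later tasks (self._worker_task >= 1) append to it, so all workers end up merged into one suite file.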