ooni-probe-cli/pkg/oonimkall/taskrunner.go
Simone Basso 9cdca4137d
forwardport: pull the patches mentioned in ooni/probe#1908 (#629)
* [forwardport] fix(oonimkall): make logger used by tasks unit testable (#623)

This diff forward ports e4b04642c51e7461728b25941624e1b97ef0ec83.

Reference issue: https://github.com/ooni/probe/issues/1903

* [forwardport] feat(oonimkall): improve taskEmitter testability (#624)

This diff forward ports 3e0f01a389c1f4cdd7878ec151aff91870a0bdff.

1. rename eventemitter{,_test}.go => taskemitter{,_test}.go because
the new name is more appropriate now that we have merged the
internal/task package into the oonimkall package;

2. rename runner.go's `run` function to `runTask`;

3. modify `runTask` to use the new `taskEmitterUsingChan` abstraction,
on which we will do more work at a later point in this list;

4. introduce the `runTaskWithEmitter` factory, which is called by `runTask`
and allows us to write unit tests more easily;

5. acknowledge that `runner` was not using its `out` field;

6. use the new `taskEmitterWrapper` in `newRunner`;

7. acknowledge that `runnerCallbacks` could use the generic
`taskEmitter` interface as its field type rather than a concrete type;

8. rewrite tests to use `runTaskWithEmitter`, which leads to
simpler code that does not require a goroutine;

9. acknowledge that the code has been ignoring the `DisabledEvents`
setting for quite some time, so stop supporting it;

10. refactor the `taskEmitter` implementation as follows:

    1. we still have the `taskEmitter` interface;

    2. `taskEmitterUsingChan` wraps the channel and allows for
    emitting events using the channel;

    3. `taskEmitterUsingChan` owns an `eof` channel that is
    closed by `Close` (which is idempotent) and signals that
    we should stop emitting (see the sketch after this list);

    4. make sure `runTask` creates a `taskEmitterUsingChan`
    and calls its `Close` method when done;

    5. completely remove the code for disabling events
    since the code was actually ignoring the setting;

    6. add a `taskEmitterWrapper` that adds common functions
    for emitting events to _any_ `taskEmitter`;

    7. write unit tests for `taskEmitterUsingChan` and
    for `taskEmitterWrapper`;

11. acknowledge that the abstraction we need for testing is
actually a thread-safe collector that appends events to a
vector, and refactor all tests accordingly.
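
To make points 10.2-10.4 concrete, here is a minimal sketch of such an
emitter; the `event` envelope and the constructor name are illustrative
and not necessarily identical to the code in this diff:

```go
package oonimkall

import "sync"

// event is the envelope for emitted events (illustrative name).
type event struct {
	Key   string
	Value interface{}
}

// taskEmitterUsingChan emits events using a channel and owns an
// eof channel that Close closes to signal we should stop emitting.
type taskEmitterUsingChan struct {
	eof  chan interface{}
	once sync.Once
	out  chan<- *event
}

func newTaskEmitterUsingChan(out chan<- *event) *taskEmitterUsingChan {
	return &taskEmitterUsingChan{
		eof: make(chan interface{}),
		out: out,
	}
}

// Emit posts an event on the output channel unless Close has been
// called; selecting on eof guarantees we never block after Close.
func (e *taskEmitterUsingChan) Emit(key string, value interface{}) {
	select {
	case e.out <- &event{Key: key, Value: value}:
	case <-e.eof:
	}
}

// Close is idempotent thanks to sync.Once.
func (e *taskEmitterUsingChan) Close() {
	e.once.Do(func() { close(e.eof) })
}
```

Point 10.4 then amounts to `runTask` constructing one of these and
deferring its `Close`.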

See https://github.com/ooni/probe/issues/1903

* [forwardport] refactor(oonimkall): make the runner unit-testable (#625)

This diff forward ports 9423947faf6980d92d2fe67efe3829e8fef76586.

See https://github.com/ooni/probe/issues/1903

* [forwardport] feat(oonimkall): write unit tests for the runner component (#626)

This diff forward ports 35dd0e3788b8fa99c541452bbb5e0ae4871239e1.

Forward porting note: compared to 35dd0e3788b8fa99c541452bbb5e0ae4871239e1,
the diff I'm committing here is slightly different. In `master` we do not
have the case where a measurement fails and a measurement is still returned,
thus I needed to adapt the test as follows:

```diff
diff --git a/pkg/oonimkall/runner_internal_test.go b/pkg/oonimkall/runner_internal_test.go
index 334b574..84c7436 100644
--- a/pkg/oonimkall/runner_internal_test.go
+++ b/pkg/oonimkall/runner_internal_test.go
@@ -568,15 +568,6 @@ func TestTaskRunnerRun(t *testing.T) {
                }, {
                        Key:   failureMeasurement,
                        Count: 1,
-               }, {
-                       Key:   measurement,
-                       Count: 1,
-               }, {
-                       Key:   statusMeasurementSubmission,
-                       Count: 1,
-               }, {
-                       Key:   statusMeasurementDone,
-                       Count: 1,
                }, {
                        Key:   statusEnd,
                        Count: 1,
```

I still need to write more assertions for each emitted event,
but the code we have here is already a good starting point.

See https://github.com/ooni/probe/issues/1903
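
Tests like the one in the diff above count events by type. A thread-safe
collector in the spirit of point 11 of the earlier list could look like this
sketch, reusing the `event` envelope from the sketch above (names are
illustrative, not necessarily the exact test helper):

```go
package oonimkall

import "sync"

// taskEmitterCollector collects events into a vector in a
// thread-safe way so that tests can inspect what was emitted.
type taskEmitterCollector struct {
	events []*event
	mu     sync.Mutex
}

// Emit appends the event to the internal vector.
func (e *taskEmitterCollector) Emit(key string, value interface{}) {
	e.mu.Lock()
	defer e.mu.Unlock()
	e.events = append(e.events, &event{Key: key, Value: value})
}

// Collect returns a copy of the collected events.
func (e *taskEmitterCollector) Collect() []*event {
	e.mu.Lock()
	defer e.mu.Unlock()
	return append([]*event{}, e.events...)
}
```

A test can then run the task with this emitter, build a map from event
key to count, and compare it with the expected counts shown in the diff.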

* [forwardport] refactor(oonimkall): merge files, use proper names, zap unneeded integration tests (#627)

This diff forward ports f894427d24edc9a03fc78306d0093e7b51c46c25.

Forward porting note: this diff is slightly different from the original
mentioned above because it carries forward the changes mentioned in the
previous diff, which stem from the master branch handling a failed
measurement differently from the release/3.11 branch.

Move everything that looked like the task's model into the
taskmodel.go file, for consistency.

Make sure it's clear that some variables are event types.

Rename the concrete `runner` as `runnerForTask`.

Also, remove now-unnecessary (and flaky!) integration tests
for the `runnerForTask` type.

While there, notice that some generated URLs were broken during the
probe-engine => probe-cli move and fix them.

See https://github.com/ooni/probe/issues/1903

* [forwardport] refactor(oonimkall): we can simplify StartTask tests (#628)

This diff forward ports dcf2986c2032d8185d58d24130a7f2c2d61ef2fb.

* refactor(oonimkall): we can simplify StartTask tests

We have enough checks for runnerForTask, so we do not need to
duplicate them when testing StartTask.

While there, refactor how we start tasks to remove the need for
extra runner functions.

This is the objective I wanted to achieve for oonimkall:

1. fewer duplicate tests, and

2. more unit tests (which are less flaky)

At this point, we're basically done (pending forwardporting to
master) with https://github.com/ooni/probe/issues/1903.
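
For context, this is roughly how a consumer drives a task through the
public API; a hypothetical sketch assuming the `StartTask`, `IsDone`, and
`WaitForNextEvent` surface exposed by this package, with illustrative
settings values:

```go
package main

import (
	"fmt"

	"github.com/ooni/probe-cli/v3/pkg/oonimkall"
)

func main() {
	// Hypothetical settings: the field names follow the settings struct
	// used by runnerForTask; adjust the paths for your environment.
	settings := `{
		"name": "Example",
		"log_level": "INFO",
		"state_dir": "/tmp/oonimkall",
		"version": 1,
		"options": {
			"software_name": "oonimkall-sketch",
			"software_version": "0.1.0"
		}
	}`
	task, err := oonimkall.StartTask(settings)
	if err != nil {
		fmt.Println("StartTask failed:", err)
		return
	}
	// Drain events until the task completes; each event is serialized JSON.
	for !task.IsDone() {
		fmt.Println(task.WaitForNextEvent())
	}
}
```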

* fix(oonimkall): TestStartTaskGood shouldn't cancel the test

Cancelling creates a race condition where the test may fail whenever the
whole "Example" run cannot complete in less than one second.

This should explain the build failures I've seen so far and why
I didn't see those failures when running locally.
2021-12-02 12:47:07 +01:00


package oonimkall

import (
	"context"
	"encoding/json"
	"fmt"
	"net/url"
	"time"

	"github.com/ooni/probe-cli/v3/internal/engine"
	"github.com/ooni/probe-cli/v3/internal/engine/model"
	"github.com/ooni/probe-cli/v3/internal/runtimex"
)

// runnerForTask runs a specific task
type runnerForTask struct {
	emitter        *taskEmitterWrapper
	kvStoreBuilder taskKVStoreFSBuilder
	sessionBuilder taskSessionBuilder
	settings       *settings
}

var _ taskRunner = &runnerForTask{}

// newRunner creates a new task runner
func newRunner(settings *settings, emitter taskEmitter) *runnerForTask {
	return &runnerForTask{
		emitter:        &taskEmitterWrapper{emitter},
		kvStoreBuilder: &taskKVStoreFSBuilderEngine{},
		sessionBuilder: &taskSessionBuilderEngine{},
		settings:       settings,
	}
}

// failureInvalidVersion is the failure returned when Version is invalid
const failureInvalidVersion = "invalid Settings.Version number"
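
// hasUnsupportedSettings emits failureStartup and returns true when the
// settings version is older than the task ABI version we support.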
func (r *runnerForTask) hasUnsupportedSettings() bool {
	if r.settings.Version < taskABIVersion {
		r.emitter.EmitFailureStartup(failureInvalidVersion)
		return true
	}
	return false
}
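
// newsession creates a new measurement session using the configured
// kvstore and session builders and the given logger.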
func (r *runnerForTask) newsession(ctx context.Context, logger model.Logger) (taskSession, error) {
	kvstore, err := r.kvStoreBuilder.NewFS(r.settings.StateDir)
	if err != nil {
		return nil, err
	}
	var proxyURL *url.URL
	if r.settings.Proxy != "" {
		var err error
		proxyURL, err = url.Parse(r.settings.Proxy)
		if err != nil {
			return nil, err
		}
	}
	config := engine.SessionConfig{
		KVStore:         kvstore,
		Logger:          logger,
		ProxyURL:        proxyURL,
		SoftwareName:    r.settings.Options.SoftwareName,
		SoftwareVersion: r.settings.Options.SoftwareVersion,
		TempDir:         r.settings.TempDir,
		TunnelDir:       r.settings.TunnelDir,
	}
	if r.settings.Options.ProbeServicesBaseURL != "" {
		config.AvailableProbeServices = []model.Service{{
			Type:    "https",
			Address: r.settings.Options.ProbeServicesBaseURL,
		}}
	}
	return r.sessionBuilder.NewSession(ctx, config)
}
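
// contextForExperiment returns the context to use when measuring: the
// caller's ctx if the experiment is interruptible, and a background
// context otherwise, so that cancellation cannot abort a non-interruptible
// experiment midway; in that case we stop at the next loop iteration.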
func (r *runnerForTask) contextForExperiment(
	ctx context.Context, builder taskExperimentBuilder,
) context.Context {
	if builder.Interruptible() {
		return ctx
	}
	return context.Background()
}
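
// runnerCallbacks forwards the experiment's progress to the task emitter.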
type runnerCallbacks struct {
	emitter taskEmitter
}

func (cb *runnerCallbacks) OnProgress(percentage float64, message string) {
	cb.emitter.Emit(eventTypeStatusProgress, eventStatusProgress{
		// The work before the measurement loop accounts for the first 40%
		// of the progress, so rescale the experiment's progress into the
		// remaining [0.4, 1.0] range (e.g., 0.5 becomes 0.4+0.5*0.6 = 0.7).
		Percentage: 0.4 + (percentage * 0.6),
		Message:    message,
	})
}

// Run runs the runner until completion. The context argument controls
// when to stop when processing multiple inputs, as well as when to stop
// experiments explicitly marked as interruptible.
func (r *runnerForTask) Run(ctx context.Context) {
	var logger model.Logger = newTaskLogger(r.emitter, r.settings.LogLevel)
	r.emitter.Emit(eventTypeStatusQueued, eventEmpty{})
	if r.hasUnsupportedSettings() {
		// event failureStartup already emitted
		return
	}
	r.emitter.Emit(eventTypeStatusStarted, eventEmpty{})
	sess, err := r.newsession(ctx, logger)
	if err != nil {
		r.emitter.EmitFailureStartup(err.Error())
		return
	}
	endEvent := new(eventStatusEnd)
	defer func() {
		sess.Close()
		r.emitter.Emit(eventTypeStatusEnd, endEvent)
	}()
	builder, err := sess.NewExperimentBuilderByName(r.settings.Name)
	if err != nil {
		r.emitter.EmitFailureStartup(err.Error())
		return
	}
	logger.Info("Looking up OONI backends... please, be patient")
	if err := sess.MaybeLookupBackendsContext(ctx); err != nil {
		r.emitter.EmitFailureStartup(err.Error())
		return
	}
	r.emitter.EmitStatusProgress(0.1, "contacted bouncer")
	logger.Info("Looking up your location... please, be patient")
	if err := sess.MaybeLookupLocationContext(ctx); err != nil {
		r.emitter.EmitFailureGeneric(eventTypeFailureIPLookup, err.Error())
		r.emitter.EmitFailureGeneric(eventTypeFailureASNLookup, err.Error())
		r.emitter.EmitFailureGeneric(eventTypeFailureCCLookup, err.Error())
		r.emitter.EmitFailureGeneric(eventTypeFailureResolverLookup, err.Error())
		return
	}
	r.emitter.EmitStatusProgress(0.2, "geoip lookup")
	r.emitter.EmitStatusProgress(0.3, "resolver lookup")
	r.emitter.Emit(eventTypeStatusGeoIPLookup, eventStatusGeoIPLookup{
		ProbeIP:          sess.ProbeIP(),
		ProbeASN:         sess.ProbeASNString(),
		ProbeCC:          sess.ProbeCC(),
		ProbeNetworkName: sess.ProbeNetworkName(),
	})
	r.emitter.Emit(eventTypeStatusResolverLookup, eventStatusResolverLookup{
		ResolverASN:         sess.ResolverASNString(),
		ResolverIP:          sess.ResolverIP(),
		ResolverNetworkName: sess.ResolverNetworkName(),
	})
	builder.SetCallbacks(&runnerCallbacks{emitter: r.emitter})
	if len(r.settings.Inputs) <= 0 {
		switch builder.InputPolicy() {
		case engine.InputOrQueryBackend, engine.InputStrictlyRequired:
			r.emitter.EmitFailureStartup("no input provided")
			return
		}
		r.settings.Inputs = append(r.settings.Inputs, "")
	}
	experiment := builder.NewExperimentInstance()
	defer func() {
		endEvent.DownloadedKB = experiment.KibiBytesReceived()
		endEvent.UploadedKB = experiment.KibiBytesSent()
	}()
	if !r.settings.Options.NoCollector {
		logger.Info("Opening report... please, be patient")
		if err := experiment.OpenReportContext(ctx); err != nil {
			r.emitter.EmitFailureGeneric(eventTypeFailureReportCreate, err.Error())
			return
		}
		r.emitter.EmitStatusProgress(0.4, "open report")
		r.emitter.Emit(eventTypeStatusReportCreate, eventStatusReportGeneric{
			ReportID: experiment.ReportID(),
		})
	}
	// This deviates a little bit from measurement-kit, for which
	// a zero timeout is actually valid. Since it does not make much
	// sense, here we're changing the behaviour.
	//
	// See https://github.com/measurement-kit/measurement-kit/issues/1922
	if r.settings.Options.MaxRuntime > 0 {
		// We want to honour max_runtime only when we're running an
		// experiment that clearly wants specific input. We could refine
		// this policy in the future, but for now this covers in a
		// reasonable way web connectivity, so we should be ok.
		switch builder.InputPolicy() {
		case engine.InputOrQueryBackend, engine.InputStrictlyRequired:
			var cancel context.CancelFunc
			ctx, cancel = context.WithTimeout(
				ctx, time.Duration(r.settings.Options.MaxRuntime)*time.Second,
			)
			defer cancel()
		}
	}
	inputCount := len(r.settings.Inputs)
	start := time.Now()
	inflatedMaxRuntime := r.settings.Options.MaxRuntime + r.settings.Options.MaxRuntime/10
	eta := start.Add(time.Duration(inflatedMaxRuntime) * time.Second)
	for idx, input := range r.settings.Inputs {
		if ctx.Err() != nil {
			break
		}
		logger.Infof("Starting measurement with index %d", idx)
		r.emitter.Emit(eventTypeStatusMeasurementStart, eventMeasurementGeneric{
			Idx:   int64(idx),
			Input: input,
		})
		if input != "" && inputCount > 0 {
			var percentage float64
			if r.settings.Options.MaxRuntime > 0 {
				now := time.Now()
				percentage = (now.Sub(start).Seconds()/eta.Sub(start).Seconds())*0.6 + 0.4
			} else {
				percentage = (float64(idx)/float64(inputCount))*0.6 + 0.4
			}
			r.emitter.EmitStatusProgress(percentage, fmt.Sprintf(
				"processing %s", input,
			))
		}
		m, err := experiment.MeasureWithContext(
			r.contextForExperiment(ctx, builder),
			input,
		)
		if builder.Interruptible() && ctx.Err() != nil {
			// We want to stop here only if interruptible otherwise we want to
			// submit measurement and stop at beginning of next iteration
			break
		}
		m.AddAnnotations(r.settings.Annotations)
		if err != nil {
			r.emitter.Emit(eventTypeFailureMeasurement, eventMeasurementGeneric{
				Failure: err.Error(),
				Idx:     int64(idx),
				Input:   input,
			})
			// Historical note: here we used to fall through but, since we have
			// implemented async measurements, the case where there is an error
			// and we also have a valid measurement can't happen anymore. So,
			// now the only valid strategy here is to continue.
			continue
		}
		data, err := json.Marshal(m)
		runtimex.PanicOnError(err, "measurement.MarshalJSON failed")
		r.emitter.Emit(eventTypeMeasurement, eventMeasurementGeneric{
			Idx:     int64(idx),
			Input:   input,
			JSONStr: string(data),
		})
		if !r.settings.Options.NoCollector {
			logger.Info("Submitting measurement... please, be patient")
			err := experiment.SubmitAndUpdateMeasurementContext(ctx, m)
			r.emitter.Emit(measurementSubmissionEventName(err), eventMeasurementGeneric{
				Idx:     int64(idx),
				Input:   input,
				JSONStr: string(data),
				Failure: measurementSubmissionFailure(err),
			})
		}
		r.emitter.Emit(eventTypeStatusMeasurementDone, eventMeasurementGeneric{
			Idx:   int64(idx),
			Input: input,
		})
	}
}
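
// measurementSubmissionEventName maps the submission outcome to the
// event type to emit: failure when err is non-nil, success otherwise.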
func measurementSubmissionEventName(err error) string {
	if err != nil {
		return eventTypeFailureMeasurementSubmission
	}
	return eventTypeStatusMeasurementSubmission
}
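
// measurementSubmissionFailure maps the submission outcome to the failure
// string to embed in the event: err's message on failure, empty on success.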
func measurementSubmissionFailure(err error) string {
	if err != nil {
		return err.Error()
	}
	return ""
}