package tracex

import (
	"context"
	"errors"
	"io"
	"net"
	"net/http"
	"net/url"
	"strings"
	"testing"
	"time"

	"github.com/ooni/probe-cli/v3/internal/model"
	"github.com/ooni/probe-cli/v3/internal/netxlite"
)
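
// TestSaverMetadataSuccess verifies that SaverMetadataHTTPTransport emits
// http_request_metadata and http_response_metadata events for a successful
// round trip; it uses the network, so we skip it in short mode.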
func TestSaverMetadataSuccess(t *testing.T) {
	if testing.Short() {
		t.Skip("skip test in short mode")
	}
	saver := &Saver{}
	txp := SaverMetadataHTTPTransport{
		HTTPTransport: netxlite.NewHTTPTransportStdlib(model.DiscardLogger),
		Saver:         saver,
	}
	req, err := http.NewRequest("GET", "https://www.google.com", nil)
	if err != nil {
		t.Fatal(err)
	}
	req.Header.Add("User-Agent", "miniooni/0.1.0-dev")
	resp, err := txp.RoundTrip(req)
	if err != nil {
		t.Fatal(err)
	}
	if resp == nil {
		t.Fatal("expected non nil response here")
	}
	ev := saver.Read()
	if len(ev) != 2 {
		t.Fatal("expected two events")
	}
	//
	if ev[0].HTTPMethod != "GET" {
		t.Fatal("unexpected Method")
	}
	if len(ev[0].HTTPHeaders) <= 0 {
		t.Fatal("unexpected Headers")
	}
	if ev[0].HTTPURL != "https://www.google.com" {
		t.Fatal("unexpected URL")
	}
	if ev[0].Name != "http_request_metadata" {
		t.Fatal("unexpected Name")
	}
	if !ev[0].Time.Before(time.Now()) {
		t.Fatal("unexpected Time")
	}
	//
	if ev[1].HTTPStatusCode != 200 {
		t.Fatal("unexpected StatusCode")
	}
	if len(ev[1].HTTPHeaders) <= 0 {
		t.Fatal("unexpected Headers")
	}
	if ev[1].Name != "http_response_metadata" {
		t.Fatal("unexpected Name")
	}
	if !ev[1].Time.After(ev[0].Time) {
		t.Fatal("unexpected Time")
	}
}
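
// TestSaverMetadataFailure verifies that SaverMetadataHTTPTransport emits
// only the http_request_metadata event and propagates the error when the
// underlying transport fails.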
func TestSaverMetadataFailure(t *testing.T) {
	expected := errors.New("mocked error")
	saver := &Saver{}
	txp := SaverMetadataHTTPTransport{
		HTTPTransport: FakeTransport{
			Err: expected,
		},
		Saver: saver,
	}
	req, err := http.NewRequest("GET", "http://www.google.com", nil)
	if err != nil {
		t.Fatal(err)
	}
	req.Header.Add("User-Agent", "miniooni/0.1.0-dev")
	resp, err := txp.RoundTrip(req)
	if !errors.Is(err, expected) {
		t.Fatal("not the error we expected")
	}
	if resp != nil {
		t.Fatal("expected nil response here")
	}
	ev := saver.Read()
	if len(ev) != 1 {
		t.Fatal("expected one event")
	}
	if ev[0].HTTPMethod != "GET" {
		t.Fatal("unexpected Method")
	}
	if len(ev[0].HTTPHeaders) <= 0 {
		t.Fatal("unexpected Headers")
	}
	if ev[0].HTTPURL != "http://www.google.com" {
		t.Fatal("unexpected URL")
	}
	if ev[0].Name != "http_request_metadata" {
		t.Fatal("unexpected Name")
	}
	if !ev[0].Time.Before(time.Now()) {
		t.Fatal("unexpected Time")
	}
}
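
// TestSaverTransactionSuccess verifies that SaverTransactionHTTPTransport
// emits http_transaction_start and http_transaction_done events for a
// successful round trip; it uses the network, so we skip it in short mode.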
func TestSaverTransactionSuccess(t *testing.T) {
	if testing.Short() {
		t.Skip("skip test in short mode")
	}
	saver := &Saver{}
	txp := SaverTransactionHTTPTransport{
		HTTPTransport: netxlite.NewHTTPTransportStdlib(model.DiscardLogger),
		Saver:         saver,
	}
	req, err := http.NewRequest("GET", "https://www.google.com", nil)
	if err != nil {
		t.Fatal(err)
	}
	resp, err := txp.RoundTrip(req)
	if err != nil {
		t.Fatal(err)
	}
	if resp == nil {
		t.Fatal("expected non nil response here")
	}
	ev := saver.Read()
	if len(ev) != 2 {
		t.Fatal("expected two events")
	}
	//
	if ev[0].Name != "http_transaction_start" {
		t.Fatal("unexpected Name")
	}
	if !ev[0].Time.Before(time.Now()) {
		t.Fatal("unexpected Time")
	}
	//
	if ev[1].Err != nil {
		t.Fatal("unexpected Err")
	}
	if ev[1].Name != "http_transaction_done" {
		t.Fatal("unexpected Name")
	}
	if !ev[1].Time.After(ev[0].Time) {
		t.Fatal("unexpected Time")
	}
}
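
// TestSaverTransactionFailure verifies that SaverTransactionHTTPTransport
// still emits both transaction events and records the round trip error
// when the underlying transport fails.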
func TestSaverTransactionFailure(t *testing.T) {
	expected := errors.New("mocked error")
	saver := &Saver{}
	txp := SaverTransactionHTTPTransport{
		HTTPTransport: FakeTransport{
			Err: expected,
		},
		Saver: saver,
	}
	req, err := http.NewRequest("GET", "http://www.google.com", nil)
	if err != nil {
		t.Fatal(err)
	}
	resp, err := txp.RoundTrip(req)
	if !errors.Is(err, expected) {
		t.Fatal("not the error we expected")
	}
	if resp != nil {
		t.Fatal("expected nil response here")
	}
	ev := saver.Read()
	if len(ev) != 2 {
		t.Fatal("expected two events")
	}
	if ev[0].Name != "http_transaction_start" {
		t.Fatal("unexpected Name")
	}
	if !ev[0].Time.Before(time.Now()) {
		t.Fatal("unexpected Time")
	}
	if ev[1].Name != "http_transaction_done" {
		t.Fatal("unexpected Name")
	}
	if !errors.Is(ev[1].Err, expected) {
		t.Fatal("unexpected Err")
	}
	if !ev[1].Time.After(ev[0].Time) {
		t.Fatal("unexpected Time")
	}
}
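
// TestSaverBodySuccess verifies that SaverBodyHTTPTransport saves request
// and response body snapshots truncated to SnapshotSize.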
func TestSaverBodySuccess(t *testing.T) {
	saver := new(Saver)
	txp := SaverBodyHTTPTransport{
		HTTPTransport: FakeTransport{
			Func: func(req *http.Request) (*http.Response, error) {
				data, err := netxlite.ReadAllContext(context.Background(), req.Body)
				if err != nil {
					t.Fatal(err)
				}
				if string(data) != "deadbeef" {
					t.Fatal("invalid data")
				}
				return &http.Response{
					StatusCode: 501,
					Body:       io.NopCloser(strings.NewReader("abad1dea")),
				}, nil
			},
		},
		SnapshotSize: 4,
		Saver:        saver,
	}
	body := strings.NewReader("deadbeef")
	req, err := http.NewRequest("POST", "http://x.org/y", body)
	if err != nil {
		t.Fatal(err)
	}
	resp, err := txp.RoundTrip(req)
	if err != nil {
		t.Fatal(err)
	}
	if resp.StatusCode != 501 {
		t.Fatal("unexpected status code")
	}
	defer resp.Body.Close()
	data, err := netxlite.ReadAllContext(context.Background(), resp.Body)
	if err != nil {
		t.Fatal(err)
	}
	if string(data) != "abad1dea" {
		t.Fatal("unexpected body")
	}
	ev := saver.Read()
	if len(ev) != 2 {
		t.Fatal("unexpected number of events")
	}
	if string(ev[0].Data) != "dead" {
		t.Fatal("invalid Data")
	}
	if ev[0].DataIsTruncated != true {
		t.Fatal("invalid DataIsTruncated")
	}
	if ev[0].Name != "http_request_body_snapshot" {
		t.Fatal("invalid Name")
	}
	if ev[0].Time.After(time.Now()) {
		t.Fatal("invalid Time")
	}
	if string(ev[1].Data) != "abad" {
		t.Fatal("invalid Data")
	}
	if ev[1].DataIsTruncated != true {
		t.Fatal("invalid DataIsTruncated")
	}
	if ev[1].Name != "http_response_body_snapshot" {
		t.Fatal("invalid Name")
	}
	if ev[1].Time.Before(ev[0].Time) {
		t.Fatal("invalid Time")
	}
}
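
// TestSaverBodyRequestReadError verifies that we propagate the error and
// save no events when reading the request body fails.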
func TestSaverBodyRequestReadError(t *testing.T) {
	saver := new(Saver)
	txp := SaverBodyHTTPTransport{
		HTTPTransport: FakeTransport{
			Func: func(req *http.Request) (*http.Response, error) {
				panic("should not be called")
			},
		},
		SnapshotSize: 4,
		Saver:        saver,
	}
	expected := errors.New("mocked error")
	body := FakeBody{Err: expected}
	req, err := http.NewRequest("POST", "http://x.org/y", body)
	if err != nil {
		t.Fatal(err)
	}
	resp, err := txp.RoundTrip(req)
	if !errors.Is(err, expected) {
		t.Fatal("not the error we expected")
	}
	if resp != nil {
		t.Fatal("expected nil response")
	}
	ev := saver.Read()
	if len(ev) != 0 {
		t.Fatal("unexpected number of events")
	}
}
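
// TestSaverBodyRoundTripError verifies that we only save the request body
// snapshot when the underlying round trip fails.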
func TestSaverBodyRoundTripError(t *testing.T) {
	saver := new(Saver)
	expected := errors.New("mocked error")
	txp := SaverBodyHTTPTransport{
		HTTPTransport: FakeTransport{
			Err: expected,
		},
		SnapshotSize: 4,
		Saver:        saver,
	}
	body := strings.NewReader("deadbeef")
	req, err := http.NewRequest("POST", "http://x.org/y", body)
	if err != nil {
		t.Fatal(err)
	}
	resp, err := txp.RoundTrip(req)
	if !errors.Is(err, expected) {
		t.Fatal("not the error we expected")
	}
	if resp != nil {
		t.Fatal("expected nil response")
	}
	ev := saver.Read()
	if len(ev) != 1 {
		t.Fatal("unexpected number of events")
	}
	if string(ev[0].Data) != "dead" {
		t.Fatal("invalid Data")
	}
	if ev[0].DataIsTruncated != true {
		t.Fatal("invalid DataIsTruncated")
	}
	if ev[0].Name != "http_request_body_snapshot" {
		t.Fatal("invalid Name")
	}
	if ev[0].Time.After(time.Now()) {
		t.Fatal("invalid Time")
	}
}
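
// TestSaverBodyResponseReadError verifies that we only save the request
// body snapshot when reading the response body fails.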
func TestSaverBodyResponseReadError(t *testing.T) {
	saver := new(Saver)
	expected := errors.New("mocked error")
	txp := SaverBodyHTTPTransport{
		HTTPTransport: FakeTransport{
			Func: func(req *http.Request) (*http.Response, error) {
				return &http.Response{
					StatusCode: 200,
					Body: FakeBody{
						Err: expected,
					},
				}, nil
			},
		},
		SnapshotSize: 4,
		Saver:        saver,
	}
	body := strings.NewReader("deadbeef")
	req, err := http.NewRequest("POST", "http://x.org/y", body)
	if err != nil {
		t.Fatal(err)
	}
	resp, err := txp.RoundTrip(req)
	if !errors.Is(err, expected) {
		t.Fatal("not the error we expected")
	}
	if resp != nil {
		t.Fatal("expected nil response")
	}
	ev := saver.Read()
	if len(ev) != 1 {
		t.Fatal("unexpected number of events")
	}
	if string(ev[0].Data) != "dead" {
		t.Fatal("invalid Data")
	}
	if ev[0].DataIsTruncated != true {
		t.Fatal("invalid DataIsTruncated")
	}
	if ev[0].Name != "http_request_body_snapshot" {
		t.Fatal("invalid Name")
	}
	if ev[0].Time.After(time.Now()) {
		t.Fatal("invalid Time")
	}
}
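
// TestCloneHeaders verifies that httpCloneHeaders sets the Host header
// from req.Host and falls back to req.URL.Host when req.Host is empty.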
func TestCloneHeaders(t *testing.T) {
	t.Run("with req.Host set", func(t *testing.T) {
		req := &http.Request{
			Host: "www.example.com",
			URL: &url.URL{
				Host: "www.kernel.org",
			},
			Header: http.Header{},
		}
		header := httpCloneHeaders(req)
		if header.Get("Host") != "www.example.com" {
			t.Fatal("did not set Host header correctly")
		}
	})

	t.Run("with only req.URL.Host set", func(t *testing.T) {
		req := &http.Request{
			Host: "",
			URL: &url.URL{
				Host: "www.kernel.org",
			},
			Header: http.Header{},
		}
		header := httpCloneHeaders(req)
		if header.Get("Host") != "www.kernel.org" {
			t.Fatal("did not set Host header correctly")
		}
	})
}
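
// FakeDialer is a fake dialer that returns a canned connection or error.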
type FakeDialer struct {
	Conn net.Conn
	Err  error
}

func (d FakeDialer) DialContext(ctx context.Context, network, address string) (net.Conn, error) {
	time.Sleep(10 * time.Microsecond)
	return d.Conn, d.Err
}
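
// FakeTransport is a fake HTTP transport that returns a canned response
// or error, or delegates to Func when it is set.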
type FakeTransport struct {
	Name string
	Err  error
	Func func(*http.Request) (*http.Response, error)
	Resp *http.Response
}

func (txp FakeTransport) Network() string {
	return txp.Name
}

func (txp FakeTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	time.Sleep(10 * time.Microsecond)
	if txp.Func != nil {
		return txp.Func(req)
	}
	if req.Body != nil {
		netxlite.ReadAllContext(req.Context(), req.Body)
		req.Body.Close()
	}
	if txp.Err != nil {
		return nil, txp.Err
	}
	txp.Resp.Request = req // non thread safe but it doesn't matter
	return txp.Resp, nil
}

func (txp FakeTransport) CloseIdleConnections() {}
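
// FakeBody is a fake request or response body whose Read always fails
// with the configured Err.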
type FakeBody struct {
	Err error
}

func (fb FakeBody) Read(p []byte) (int, error) {
	time.Sleep(10 * time.Microsecond)
	return 0, fb.Err
}

func (fb FakeBody) Close() error {
	return nil
}