Writing HTTP servers and clients
Vert.x allows you to easily write non-blocking HTTP clients and servers.
Vert.x supports the HTTP/1.0, HTTP/1.1 and HTTP/2 protocols.
The base API for HTTP is the same for HTTP/1.x and HTTP/2; specific API features are available for dealing with the HTTP/2 protocol.
Creating an HTTP Server
The simplest way to create an HTTP server, using all default options, is as follows:
server = vertx.create_http_server()
Configuring an HTTP server
If you don’t want the default, a server can be configured by passing in an HttpServerOptions instance when creating it:
options = {
'maxWebsocketFrameSize' => 1000000
}
server = vertx.create_http_server(options)
Configuring an HTTP/2 server
Vert.x supports HTTP/2 over TLS (h2) and over TCP (h2c).
- h2 identifies the HTTP/2 protocol when used over TLS negotiated by Application-Layer Protocol Negotiation (ALPN)
- h2c identifies the HTTP/2 protocol when used in clear text over TCP; such connections are established either with an HTTP/1.1 upgrade request or directly
To handle h2 requests, TLS must be enabled along with useAlpn:
options = {
'useAlpn' => true,
'ssl' => true,
'keyStoreOptions' => {
'path' => "/path/to/my/keystore"
}
}
server = vertx.create_http_server(options)
ALPN is a TLS extension that negotiates the protocol before the client and the server start to exchange data.
Clients that don’t support ALPN will still be able to do a classic SSL handshake.
ALPN will usually agree on the h2 protocol, although http/1.1 can be used if the server or the client decides so.
To handle h2c requests, TLS must be disabled. The server will upgrade to HTTP/2 any HTTP/1.1 request that asks to upgrade to HTTP/2. It will also accept a direct h2c connection beginning with the PRI * HTTP/2.0\r\nSM\r\n preface.
Warning: most browsers won’t support h2c, so for serving web sites you should use h2 and not h2c.
When a server accepts an HTTP/2 connection, it sends its initial settings to the client.
The settings define how the client can use the connection; the default initial settings for a server are:
- maxConcurrentStreams: 100, as recommended by the HTTP/2 RFC
- the default HTTP/2 settings values for the others
Logging network server activity
For debugging purposes, network activity can be logged.
options = {
'logActivity' => true
}
server = vertx.create_http_server(options)
See the chapter on logging network activity for a detailed explanation.
Start the Server Listening
To tell the server to listen for incoming requests you use one of the listen alternatives.
To tell the server to listen at the host and port as specified in the options:
server = vertx.create_http_server()
server.listen()
Or to specify the host and port in the call to listen, ignoring what is configured in the options:
server = vertx.create_http_server()
server.listen(8080, "myhost.com")
The default host is 0.0.0.0, which means 'listen on all available addresses', and the default port is 80.
The actual bind is asynchronous so the server might not actually be listening until some time after the call to listen has returned.
If you want to be notified when the server is actually listening you can provide a handler to the listen
call.
For example:
server = vertx.create_http_server()
server.listen(8080, "myhost.com") { |res_err,res|
if (res_err == nil)
puts "Server is now listening!"
else
puts "Failed to bind!"
end
}
Getting notified of incoming requests
To be notified when a request arrives you need to set a requestHandler:
server = vertx.create_http_server()
server.request_handler() { |request|
# Handle the request in here
}
Handling requests
When a request arrives, the request handler is called, passing in an instance of HttpServerRequest.
This object represents the server side HTTP request.
The handler is called when the headers of the request have been fully read.
If the request contains a body, that body will arrive at the server some time after the request handler has been called.
The server request object allows you to retrieve the uri, path, params and headers, amongst other things.
Each server request object is associated with one server response object. You use response to get a reference to the HttpServerResponse object.
Here’s a simple example of a server handling a request and replying with "hello world" to it.
vertx.create_http_server().request_handler() { |request|
request.response().end("Hello world")
}.listen(8080)
Request version
The version of HTTP specified in the request can be retrieved with version.
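For instance, the version could be logged from inside the request handler (a sketch, assuming request is the HttpServerRequest passed to your handler):

```ruby
# Log which protocol version the client used (e.g. HTTP_1_1 or HTTP_2)
puts "Client is using #{request.version()}"
```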
Request method
Use method to retrieve the HTTP method of the request (i.e. whether it’s GET, POST, PUT, DELETE, HEAD, OPTIONS, etc).
Request URI
Use uri to retrieve the URI of the request.
Note that this is the actual URI as passed in the HTTP request, and it’s almost always a relative URI.
The URI is as defined in Section 5.1.2 of the HTTP specification - Request-URI
Request path
Use path to return the path part of the URI.
For example, if the request URI was:
/a/b/c/page.html?param1=abc&param2=xyz
Then the path would be
/a/b/c/page.html
Request query
Use query to return the query part of the URI.
For example, if the request URI was:
/a/b/c/page.html?param1=abc&param2=xyz
Then the query would be
param1=abc&param2=xyz
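Putting these together, a handler might inspect all three parts (a sketch, assuming request is the current HttpServerRequest; the printed values mirror the example URI above):

```ruby
# For a request to /a/b/c/page.html?param1=abc&param2=xyz
puts request.uri()   # /a/b/c/page.html?param1=abc&param2=xyz
puts request.path()  # /a/b/c/page.html
puts request.query() # param1=abc&param2=xyz
```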
Request headers
Use headers to return the headers of the HTTP request.
This returns an instance of MultiMap - which is like a normal Map or Hash but allows multiple values for the same key - this is because HTTP allows multiple header values with the same key.
It also has case-insensitive keys, which means you can do the following:
headers = request.headers()
# Get the User-Agent:
puts "User agent is #{headers.get("user-agent")}"
# You can also do this and get the same result:
puts "User agent is #{headers.get("User-Agent")}"
Request host
Use host to return the host of the HTTP request.
For HTTP/1.x requests the host header is returned; for HTTP/2 requests the :authority pseudo header is returned.
Request parameters
Use params to return the parameters of the HTTP request.
Just like headers, this returns an instance of MultiMap, as there can be more than one parameter with the same name.
Request parameters are sent on the request URI, after the path. For example if the URI was:
/page.html?param1=abc&param2=xyz
Then the parameters would contain the following:
param1: 'abc' param2: 'xyz'
Note that these request parameters are retrieved from the URL of the request. If you have form attributes that have been sent as part of the submission of an HTML form in the body of a multipart/form-data request, then they will not appear in the params here.
Remote address
The address of the sender of the request can be retrieved with remoteAddress.
Absolute URI
The URI passed in an HTTP request is usually relative. If you wish to retrieve the absolute URI corresponding to the request, you can get it with absoluteURI.
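Both can be read straight off the request object (a sketch, assuming request is the current HttpServerRequest):

```ruby
# remote_address returns the sender's socket address; absolute_uri resolves the request URI
puts "Request from #{request.remote_address()}"
puts "Absolute URI is #{request.absolute_uri()}"
```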
End handler
The endHandler of the request is invoked when the entire request, including any body, has been fully read.
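For example (a sketch):

```ruby
request.end_handler() { |v|
# Called once the headers and any body have been fully read
puts "The entire request has been read"
}
```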
Reading Data from the Request Body
Often an HTTP request contains a body that we want to read. As previously mentioned the request handler is called when just the headers of the request have arrived so the request object does not have a body at that point.
This is because the body may be very large (e.g. a file upload) and we don’t generally want to buffer the entire body in memory before handing it to you, as that could cause the server to exhaust available memory.
To receive the body, you can use the handler on the request; this will get called every time a chunk of the request body arrives. Here’s an example:
request.handler() { |buffer|
puts "I have received a chunk of the body of length #{buffer.length()}"
}
The object passed into the handler is a Buffer, and the handler can be called multiple times as data arrives from the network, depending on the size of the body.
In some cases (e.g. if the body is small) you will want to aggregate the entire body in memory, so you could do the aggregation yourself as follows:
require 'vertx/buffer'
# Create an empty buffer
totalBuffer = Vertx::Buffer.buffer()
request.handler() { |buffer|
puts "I have received a chunk of the body of length #{buffer.length()}"
totalBuffer.append_buffer(buffer)
}
request.end_handler() { |v|
puts "Full body received, length = #{totalBuffer.length()}"
}
This is such a common case that Vert.x provides a bodyHandler to do this for you. The body handler is called once when all the body has been received:
request.body_handler() { |totalBuffer|
puts "Full body received, length = #{totalBuffer.length()}"
}
Pumping requests
The request object is a ReadStream so you can pump the request body to any WriteStream instance.
See the chapter on streams and pumps for a detailed explanation.
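As a sketch, the request body could be pumped straight to a file, assuming file is a hypothetical AsyncFile you have already opened for writing:

```ruby
require 'vertx/pump'
# Pump the request body to the open file without buffering it all in memory
Vertx::Pump.pump(request, file).start()
request.end_handler() { |v|
file.close()
}
```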
Handling HTML forms
HTML forms can be submitted with either a content type of application/x-www-form-urlencoded
or multipart/form-data
.
For url encoded forms, the form attributes are encoded in the url, just like normal query parameters.
For multi-part forms they are encoded in the request body, and as such are not available until the entire body has been read from the wire.
Multi-part forms can also contain file uploads.
If you want to retrieve the attributes of a multi-part form you should tell Vert.x that you expect to receive
such a form before any of the body is read by calling setExpectMultipart
with true, and then you should retrieve the actual attributes using formAttributes
once the entire body has been read:
server.request_handler() { |request|
request.set_expect_multipart(true)
request.end_handler() { |v|
# The body has now been fully read, so retrieve the form attributes
formAttributes = request.form_attributes()
}
}
Handling form file uploads
Vert.x can also handle file uploads which are encoded in a multi-part request body.
To receive file uploads you tell Vert.x to expect a multi-part form and set an uploadHandler on the request.
This handler will be called once for every upload that arrives on the server.
The object passed into the handler is an HttpServerFileUpload instance.
server.request_handler() { |request|
request.set_expect_multipart(true)
request.upload_handler() { |upload|
puts "Got a file upload #{upload.name()}"
}
}
File uploads can be large; we don’t provide the entire upload in a single buffer as that might result in memory exhaustion. Instead, the upload data is received in chunks:
request.upload_handler() { |upload|
upload.handler() { |chunk|
puts "Received a chunk of the upload of length #{chunk.length()}"
}
}
The upload object is a ReadStream so you can pump the upload to any WriteStream instance. See the chapter on streams and pumps for a detailed explanation.
If you just want to upload the file to disk somewhere you can use streamToFileSystem:
request.upload_handler() { |upload|
upload.stream_to_file_system("myuploads_directory/#{upload.filename()}")
}
Warning: Make sure you check the filename in a production system to avoid malicious clients uploading files to arbitrary places on your filesystem. See security notes for more information.
Receiving custom HTTP/2 frames
HTTP/2 is a framed protocol with various frames for the HTTP request/response model. The protocol allows other kind of frames to be sent and received.
To receive custom frames, you can use the customFrameHandler on the request; this will get called every time a custom frame arrives. Here’s an example:
request.custom_frame_handler() { |frame|
puts "Received a frame type=#{frame.type()} payload=#{frame.payload().to_string()}"
}
HTTP/2 frames are not subject to flow control - the frame handler will be called immediately when a custom frame is received, whether the request is paused or not.
Sending back responses
The server response object is an instance of HttpServerResponse and is obtained from the request with response.
You use the response object to write a response back to the HTTP client.
Setting status code and message
The default HTTP status code for a response is 200, representing OK.
Use setStatusCode to set a different code.
You can also specify a custom status message with setStatusMessage.
If you don’t specify a status message, the default one corresponding to the status code will be used.
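For example, replying with a 404 and a custom message might look like this (a sketch):

```ruby
response = request.response()
response.set_status_code(404).set_status_message("Resource not found").end()
```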
Note: for HTTP/2 the status message won’t be present in the response since the protocol won’t transmit the message to the client.
Writing HTTP responses
To write data to an HTTP response, you use one of the write operations.
These can be invoked multiple times before the response is ended. They can be invoked in a few ways:
With a single buffer:
response = request.response()
response.write(buffer)
With a string. In this case the string will be encoded using UTF-8 and the result written to the wire.
response = request.response()
response.write("hello world!")
With a string and an encoding. In this case the string will be encoded using the specified encoding and the result written to the wire.
response = request.response()
response.write("hello world!", "UTF-16")
Writing to a response is asynchronous and always returns immediately after the write has been queued.
If you are just writing a single string or buffer to the HTTP response you can write it and end the response in a single call to the end method.
The first call to write results in the response header being written to the response. Consequently, if you are not using HTTP chunking then you must set the Content-Length header before writing to the response, since it will be too late otherwise. If you are using HTTP chunking you do not have to worry.
Ending HTTP responses
Once you have finished with the HTTP response you should end it.
This can be done in several ways:
With no arguments, the response is simply ended.
response = request.response()
response.write("hello world!")
response.end()
It can also be called with a string or buffer in the same way write
is called. In this case it’s just the same as
calling write with a string or buffer followed by calling end with no arguments. For example:
response = request.response()
response.end("hello world!")
Closing the underlying connection
You can close the underlying TCP connection with close.
Non keep-alive connections will be automatically closed by Vert.x when the response is ended.
Keep-alive connections are not automatically closed by Vert.x by default. If you want keep-alive connections to be closed after an idle time, then you can configure idleTimeout.
HTTP/2 connections send a GOAWAY frame before closing the response.
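For instance, idle keep-alive connections could be closed after 60 seconds by setting idleTimeout in the server options (a sketch; the 60-second value is an arbitrary choice):

```ruby
# Close keep-alive connections after 60 seconds of inactivity;
# pass these options to vertx.create_http_server(options)
options = {
  'idleTimeout' => 60
}
```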
Setting response headers
HTTP response headers can be added to the response by adding them directly to the headers:
response = request.response()
headers = response.headers()
headers.set("content-type", "text/html")
headers.set("other-header", "wibble")
Or you can use putHeader:
response = request.response()
response.put_header("content-type", "text/html").put_header("other-header", "wibble")
Headers must all be added before any parts of the response body are written.
Chunked HTTP responses and trailers
Vert.x supports HTTP Chunked Transfer Encoding.
This allows the HTTP response body to be written in chunks, and is normally used when a large response body is being streamed to a client and the total size is not known in advance.
You put the HTTP response into chunked mode as follows:
response = request.response()
response.set_chunked(true)
The default is non-chunked. When in chunked mode, each call to one of the write methods will result in a new HTTP chunk being written out.
When in chunked mode you can also write HTTP response trailers to the response. These are actually written in the final chunk of the response.
Note: a chunked response has no effect for an HTTP/2 stream.
To add trailers to the response, add them directly to the trailers.
response = request.response()
response.set_chunked(true)
trailers = response.trailers()
trailers.set("X-wibble", "woobble").set("X-quux", "flooble")
Or use putTrailer.
response = request.response()
response.set_chunked(true)
response.put_trailer("X-wibble", "woobble").put_trailer("X-quux", "flooble")
Serving files directly from disk or the classpath
If you were writing a web server, one way to serve a file from disk would be to open it as an AsyncFile and pump it to the HTTP response.
Or you could load it in one go using readFile and write it straight to the response.
Alternatively, Vert.x provides a method which allows you to serve a file from disk or the classpath to an HTTP response in one operation. Where supported by the underlying operating system this may result in the OS directly transferring bytes from the file to the socket without being copied through user-space at all.
This is done by using sendFile, and is usually more efficient for large files, but may be slower for small files.
Here’s a very simple web server that serves files from the file system using sendFile:
vertx.create_http_server().request_handler() { |request|
file = ""
if (request.path() == "/")
file = "index.html"
elsif (!request.path().include?(".."))
file = request.path()
end
request.response().send_file("web/#{file}")
}.listen(8080)
Sending a file is asynchronous and may not complete until some time after the call has returned. If you want to be notified when the file has been completely written you can use the version of sendFile that takes a handler.
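A sketch of that variant, assuming the handler receives the result in the same (error, result) shape as the listen example earlier:

```ruby
request.response().send_file("web/myfile.html") { |res_err,res|
if (res_err == nil)
puts "File was sent successfully"
else
puts "Failed to send file"
end
}
```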
Please see the chapter about serving files from the classpath for restrictions about the classpath resolution or disabling it.
Note: If you use sendFile while using HTTPS it will copy through user-space, since if the kernel is copying data directly from disk to socket it doesn’t give us an opportunity to apply any encryption.
Warning: If you’re going to write web servers directly using Vert.x be careful that users cannot exploit the path to access files outside the directory from which you want to serve them, or the classpath. It may be safer instead to use Vert.x Web.
When there is a need to serve just a segment of a file, say starting from a given byte, you can achieve this by doing:
vertx.create_http_server().request_handler() { |request|
offset = 0
begin
offset = Java::JavaLang::Long.parse_long(request.get_param("start"))
rescue
# error handling...
end
# 'end' is a reserved word in Ruby, so use a different variable name
end_offset = Java::JavaLang::Long::MAX_VALUE
begin
end_offset = Java::JavaLang::Long.parse_long(request.get_param("end"))
rescue
# error handling...
end
request.response().send_file("web/mybigfile.txt", offset, end_offset)
}.listen(8080)
You are not required to supply the length if you want to send a file starting from an offset until the end, in this case you can just do:
vertx.create_http_server().request_handler() { |request|
offset = 0
begin
offset = Java::JavaLang::Long.parse_long(request.get_param("start"))
rescue
# error handling...
end
request.response().send_file("web/mybigfile.txt", offset)
}.listen(8080)
Pumping responses
The server response is a WriteStream instance so you can pump to it from any ReadStream, e.g. AsyncFile, NetSocket, WebSocket or HttpServerRequest.
Here’s an example which echoes the request body back in the response for any PUT methods. It uses a pump for the body, so it will work even if the HTTP request body is much larger than can fit in memory at any one time:
require 'vertx/pump'
vertx.create_http_server().request_handler() { |request|
response = request.response()
if (request.method() == :PUT)
response.set_chunked(true)
Vertx::Pump.pump(request, response).start()
request.end_handler() { |v|
response.end()
}
else
response.set_status_code(400).end()
end
}.listen(8080)
Writing HTTP/2 frames
HTTP/2 is a framed protocol with various frames for the HTTP request/response model. The protocol allows other kind of frames to be sent and received.
To send such frames, you can use the writeCustomFrame method on the response. Here’s an example:
require 'vertx/buffer'
frameType = 40
frameStatus = 10
payload = Vertx::Buffer.buffer("some data")
# Sending a frame to the client
response.write_custom_frame(frameType, frameStatus, payload)
These frames are sent immediately and are not subject to flow control - when such a frame is sent it may arrive before other DATA frames.
Stream reset
HTTP/1.x does not allow a clean reset of a request or a response stream. For example, when a client uploads a resource already present on the server, the server needs to accept the entire request.
HTTP/2 supports stream reset at any time during the request/response:
# Reset the stream
request.response().reset()
By default the NO_ERROR (0) error code is sent; another code can be sent instead:
# Cancel the stream
request.response().reset(8)
The HTTP/2 specification defines the list of error codes one can use.
The request and the response are notified of stream reset events through the request exceptionHandler and the response exceptionHandler.
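A sketch of wiring up both handlers (an assumption on my part: on a reset, the error passed to each handler is a stream reset exception carrying the error code):

```ruby
request.exception_handler() { |err|
puts "Request stream error: #{err}"
}
request.response().exception_handler() { |err|
puts "Response stream error: #{err}"
}
```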
Server push
Server push is a new feature of HTTP/2 that enables sending multiple responses in parallel for a single client request.
When a server processes a request, it can push a request/response to the client:
response = request.response()
# Push main.js to the client
response.push(:GET, "/main.js") { |ar_err,ar|
if (ar_err == nil)
# The server is ready to push the response
pushedResponse = ar
# Send main.js response
pushedResponse.put_header("content-type", "application/json").end("alert(\"Push response hello\")")
else
puts "Could not push client resource #{ar_err}"
end
}
# Send the requested resource
response.end("<html><head><script src=\"/main.js\"></script></head><body></body></html>")
When the server is ready to push the response, the push response handler is called and the handler can send the response.
The push response handler may receive a failure: for instance, the client may cancel the push because it already has main.js in its cache and does not want it anymore.
The push method must be called before the initiating response ends; however, the pushed response can be written afterwards.
HTTP Compression
Vert.x comes with support for HTTP Compression out of the box.
This means you are able to automatically compress the body of the responses before they are sent back to the client.
If the client does not support HTTP compression the responses are sent back without compressing the body.
This allows you to handle clients that support HTTP compression and those that don’t support it at the same time.
To enable compression you can configure it with compressionSupported.
By default compression is not enabled.
When HTTP compression is enabled the server will check if the client includes an Accept-Encoding
header which
includes the supported compressions. Commonly used are deflate and gzip. Both are supported by Vert.x.
If such a header is found the server will automatically compress the body of the response with one of the supported compressions and send it back to the client.
Be aware that compression may be able to reduce network traffic but is more CPU-intensive.
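For instance (a sketch; pass these options when creating the server):

```ruby
# Enable HTTP compression on the server; responses will be gzip/deflate
# compressed when the client's Accept-Encoding header allows it
options = {
  'compressionSupported' => true
}
```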
Creating an HTTP client
You create an HttpClient instance with default options as follows:
client = vertx.create_http_client()
If you want to configure options for the client, you create it as follows:
options = {
'keepAlive' => false
}
client = vertx.create_http_client(options)
Vert.x supports HTTP/2 over TLS (h2) and over TCP (h2c).
By default the http client performs HTTP/1.1 requests; to perform HTTP/2 requests the protocolVersion must be set to HTTP_2.
For h2 requests, TLS must be enabled with Application-Layer Protocol Negotiation:
options = {
'protocolVersion' => "HTTP_2",
'ssl' => true,
'useAlpn' => true,
'trustAll' => true
}
client = vertx.create_http_client(options)
For h2c requests, TLS must be disabled; the client will do an HTTP/1.1 request and try to upgrade it to HTTP/2:
options = {
'protocolVersion' => "HTTP_2"
}
client = vertx.create_http_client(options)
h2c connections can also be established directly, i.e. a connection started with prior knowledge, when the http2ClearTextUpgrade option is set to false: after the connection is established, the client will send the HTTP/2 connection preface and expect to receive the same preface from the server.
The http server may not support HTTP/2; the actual version can be checked with version when the response arrives.
When a client connects to an HTTP/2 server, it sends its initial settings to the server.
The settings define how the server can use the connection; the default initial settings for a client are the default values defined by the HTTP/2 RFC.
Logging network client activity
For debugging purposes, network activity can be logged.
options = {
'logActivity' => true
}
client = vertx.create_http_client(options)
See the chapter on logging network activity for a detailed explanation.
Making requests
The http client is very flexible and there are various ways you can make requests with it.
Often you want to make many requests to the same host/port with an http client. To avoid you repeating the host/port every time you make a request you can configure the client with a default host/port:
# Set the default host
options = {
'defaultHost' => "wibble.com"
}
# Can also set default port if you want...
client = vertx.create_http_client(options)
client.get_now("/some-uri") { |response|
puts "Received response with status code #{response.status_code()}"
}
Alternatively if you find yourself making lots of requests to different host/ports with the same client you can simply specify the host/port when doing the request.
client = vertx.create_http_client()
# Specify both port and host name
client.get_now(8080, "myserver.mycompany.com", "/some-uri") { |response|
puts "Received response with status code #{response.status_code()}"
}
# This time use the default port 80 but specify the host name
client.get_now("foo.othercompany.com", "/other-uri") { |response|
puts "Received response with status code #{response.status_code()}"
}
Both methods of specifying host/port are supported for all the different ways of making requests with the client.
Simple requests with no request body
Often, you’ll want to make HTTP requests with no request body. This is usually the case with HTTP GET, OPTIONS and HEAD requests.
The simplest way to do this with the Vert.x http client is using the methods suffixed with Now. For example getNow.
These methods create the http request and send it in a single method call and allow you to provide a handler that will be called with the http response when it comes back.
client = vertx.create_http_client()
# Send a GET request
client.get_now("/some-uri") { |response|
puts "Received response with status code #{response.status_code()}"
}
# Send a GET request
client.head_now("/other-uri") { |response|
puts "Received response with status code #{response.status_code()}"
}
Writing general requests
At other times you don’t know the request method you want to send until run-time. For that use case we provide
general purpose request methods such as request
which allow you to specify
the HTTP method at run-time:
client = vertx.create_http_client()
client.request(:GET, "some-uri") { |response|
puts "Received response with status code #{response.status_code()}"
}.end()
client.request(:POST, "foo-uri") { |response|
puts "Received response with status code #{response.status_code()}"
}.end("some-data")
Writing request bodies
Sometimes you’ll want to write requests which have a body, or perhaps you want to write headers to a request before sending it.
To do this you can call one of the specific request methods such as post or one of the general purpose request methods such as request.
These methods don’t send the request immediately, but instead return an instance of HttpClientRequest
which can be used to write to the request body or write headers.
Here are some examples of writing a POST request with a body:
client = vertx.create_http_client()
request = client.post("some-uri") { |response|
puts "Received response with status code #{response.status_code()}"
}
# Now do stuff with the request
request.put_header("content-length", "1000")
request.put_header("content-type", "text/plain")
request.write(body)
# Make sure the request is ended when you're done with it
request.end()
# Or fluently:
client.post("some-uri") { |response|
puts "Received response with status code #{response.status_code()}"
}.put_header("content-length", "1000").put_header("content-type", "text/plain").write(body).end()
# Or even more simply:
client.post("some-uri") { |response|
puts "Received response with status code #{response.status_code()}"
}.put_header("content-type", "text/plain").end(body)
Methods exist to write strings in UTF-8 encoding and in any specific encoding and to write buffers:
require 'vertx/buffer'
# Write string encoded in UTF-8
request.write("some data")
# Write string encoded in specific encoding
request.write("some other data", "UTF-16")
# Write a buffer
buffer = Vertx::Buffer.buffer()
buffer.append_int(123).append_long(245)
request.write(buffer)
If you are just writing a single string or buffer to the HTTP request you can write it and end the request in a single call to the end function.
require 'vertx/buffer'
# Write string and end the request (send it) in a single call
request.end("some simple data")
# Write buffer and end the request (send it) in a single call
buffer = Vertx::Buffer.buffer().append_double(12.34).append_long(432)
request.end(buffer)
When you’re writing to a request, the first call to write
will result in the request headers being written
out to the wire.
The actual write is asynchronous and might not occur until some time after the call has returned.
Non-chunked HTTP requests with a request body require a Content-Length
header to be provided.
Consequently, if you are not using chunked HTTP then you must set the Content-Length
header before writing
to the request, as it will be too late otherwise.
If you are calling one of the end
methods that take a string or buffer then Vert.x will automatically calculate
and set the Content-Length
header before writing the request body.
If you are using HTTP chunking a Content-Length header is not required, so you do not have to calculate the size up-front.
Writing request headers
You can write headers to a request using the headers multi-map as follows:
# Write some headers using the headers() multimap
headers = request.headers()
headers.set("content-type", "application/json").set("other-header", "foo")
The headers are an instance of MultiMap which provides operations for adding, setting and removing entries. HTTP headers allow more than one value for a specific key.
You can also write headers using putHeader:
# Write some headers using the putHeader method
request.put_header("content-type", "application/json").put_header("other-header", "foo")
If you wish to write headers to the request you must do so before any part of the request body is written.
Non-standard HTTP methods
The OTHER HTTP method is used for non-standard methods; when this method is used, setRawMethod must be used to set the raw method to send to the server.
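A sketch of how this might look (PROPFIND is just an illustrative raw method here, and the client request shape follows the earlier examples):

```ruby
request = client.request(:OTHER, "some-uri") { |response|
puts "Received response with status code #{response.status_code()}"
}
# Set the raw method string actually sent on the wire
request.set_raw_method("PROPFIND")
request.end()
```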
Ending HTTP requests
Once you have finished with the HTTP request you must end it with one of the end operations.
Ending a request causes any headers to be written, if they have not already been written and the request to be marked as complete.
Requests can be ended in several ways. With no arguments the request is simply ended:
request.end()
Or a string or buffer can be provided in the call to end. This is like calling write with the string or buffer before calling end with no arguments.
require 'vertx/buffer'
# End the request with a string
request.end("some-data")
# End it with a buffer
buffer = Vertx::Buffer.buffer().append_float(12.3).append_int(321)
request.end(buffer)
Chunked HTTP requests
Vert.x supports HTTP Chunked Transfer Encoding for requests.
This allows the HTTP request body to be written in chunks, and is normally used when a large request body is being streamed to the server, whose size is not known in advance.
You put the HTTP request into chunked mode using setChunked
.
In chunked mode each call to write will cause a new chunk to be written to the wire. In chunked mode there is
no need to set the Content-Length
of the request up-front.
request.set_chunked(true)
# Write some chunks
i = 0
while (i < 10)
request.write("this-is-chunk-#{i}")
i+=1
end
request.end()
Request timeouts
You can set a timeout for a specific HTTP request using setTimeout
.
If the request does not return any data within the timeout period an exception will be passed to the exception handler (if provided) and the request will be closed.
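A minimal sketch, reusing a request object as in the earlier examples; the 5000 ms value is arbitrary:

```ruby
# Fail the request if no data is returned within 5 seconds
request.set_timeout(5000)
# The timeout is reported through the exception handler, if one is set
request.exception_handler() { |e|
  puts "Request failed: #{e.get_message()}"
}
request.end()
```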
Handling exceptions
You can handle exceptions corresponding to a request by setting an exception handler on the
HttpClientRequest
instance:
request = client.post("some-uri") { |response|
puts "Received response with status code #{response.status_code()}"
}
request.exception_handler() { |e|
puts "Received exception: #{e.get_message()}"
e.print_stack_trace()
}
This does not handle non-2xx responses, which need to be handled in the
HttpClientResponse
code:
request = client.post("some-uri") { |response|
if (response.status_code() == 200)
puts "Everything fine"
return
end
if (response.status_code() == 500)
puts "Unexpected behavior on the server side"
return
end
}
request.end()
Important
|
The XXXNow methods (such as getNow) cannot receive an exception handler.
|
Specifying a handler on the client request
Instead of providing a response handler in the call that creates the client request object, you can
omit the handler when the request is created and set it later on the request object itself, using
handler
, for example:
request = client.post("some-uri")
request.handler() { |response|
puts "Received response with status code #{response.status_code()}"
}
Using the request as a stream
The HttpClientRequest
instance is also a WriteStream
which means
you can pump to it from any ReadStream
instance.
For example, you could pump a file on disk to an HTTP request body as follows:
require 'vertx/pump'
request.set_chunked(true)
pump = Vertx::Pump.pump(file, request)
file.end_handler() { |v|
request.end()
}
pump.start()
Writing HTTP/2 frames
HTTP/2 is a framed protocol with various frames for the HTTP request/response model. The protocol allows other kinds of frames to be sent and received.
To send such frames, you can use writeCustomFrame
on the request. Here’s an example:
require 'vertx/buffer'
frameType = 40
frameStatus = 10
payload = Vertx::Buffer.buffer("some data")
# Sending a frame to the server
request.write_custom_frame(frameType, frameStatus, payload)
Stream reset
HTTP/1.x does not allow a clean reset of a request or a response stream; for example, when a client uploads a resource already present on the server, the server still needs to receive the entire request.
HTTP/2 supports stream reset at any time during the request/response:
request.reset()
By default the NO_ERROR (0) error code is sent, another code can be sent instead:
request.reset(8)
The HTTP/2 specification defines the list of error codes one can use.
Handlers are notified of stream reset events with the request handler
and
response handler
:
Code not translatable
Handling http responses
You receive an instance of HttpClientResponse
into the handler that you specify in one of
the request methods or by setting a handler directly on the HttpClientRequest
object.
You can query the status code and the status message of the response with statusCode
and statusMessage
.
client.get_now("some-uri") { |response|
# the status code - e.g. 200 or 404
puts "Status code is #{response.status_code()}"
# the status message e.g. "OK" or "Not Found".
puts "Status message is #{response.status_message()}"
}
Using the response as a stream
The HttpClientResponse
instance is also a ReadStream
which means
you can pump it to any WriteStream
instance.
Response headers and trailers
Http responses can contain headers. Use headers
to get the headers.
The object returned is a MultiMap
as HTTP headers can contain multiple values for single keys.
contentType = response.headers().get("content-type")
contentLength = response.headers().get("content-length")
Chunked HTTP responses can also contain trailers - these are sent in the last chunk of the response body.
Reading the response body
The response handler is called when the headers of the response have been read from the wire.
If the response has a body this might arrive in several pieces some time after the headers have been read. We don’t wait for all the body to arrive before calling the response handler as the response could be very large and we might be waiting a long time, or run out of memory for large responses.
As parts of the response body arrive, the handler
is called with
a Buffer
representing the piece of the body:
client.get_now("some-uri") { |response|
response.handler() { |buffer|
puts "Received a part of the response body: #{buffer}"
}
}
If you know the response body is not very large and want to aggregate it all in memory before handling it, you can either aggregate it yourself:
require 'vertx/buffer'
client.get_now("some-uri") { |response|
# Create an empty buffer
totalBuffer = Vertx::Buffer.buffer()
response.handler() { |buffer|
puts "Received a part of the response body: #{buffer.length()}"
totalBuffer.append_buffer(buffer)
}
response.end_handler() { |v|
# Now all the body has been read
puts "Total response body length is #{totalBuffer.length()}"
}
}
Or you can use the convenience bodyHandler
which
is called with the entire body when the response has been fully read:
client.get_now("some-uri") { |response|
response.body_handler() { |totalBuffer|
# Now all the body has been read
puts "Total response body length is #{totalBuffer.length()}"
}
}
Response end handler
The response endHandler
is called when the entire response body has been read or, if there is no body,
immediately after the headers have been read and the response handler has been called.
Reading cookies from the response
You can retrieve the list of cookies from a response using cookies
.
Alternatively you can just parse the Set-Cookie
headers yourself in the response.
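A minimal sketch of the first approach; as elsewhere in this section, a client is assumed to be in scope, and the cookies operation is assumed to return the Set-Cookie values as strings:

```ruby
client.get_now("some-uri") { |response|
  # Iterate over each cookie set by the server
  response.cookies().each { |cookie|
    puts "Received cookie: #{cookie}"
  }
}
```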
100-Continue handling
According to the HTTP 1.1 specification a client can set a
header Expect: 100-Continue
and send the request header before sending the rest of the request body.
The server can then respond with an interim response status Status: 100 (Continue)
to signify to the client that
it is ok to send the rest of the body.
The idea here is it allows the server to authorise and accept/reject the request before large amounts of data are sent. Sending large amounts of data if the request might not be accepted is a waste of bandwidth and ties up the server in reading data that it will just discard.
Vert.x allows you to set a continueHandler
on the
client request object.
This will be called if the server sends back a Status: 100 (Continue)
response to signify that it is ok to send
the rest of the request.
This is used in conjunction with `sendHead` to send the head of the request.
Here’s an example:
request = client.put("some-uri") { |response|
puts "Received response with status code #{response.status_code()}"
}
request.put_header("Expect", "100-Continue")
request.continue_handler() { |v|
# OK to send rest of body
request.write("Some data")
request.write("Some more data")
request.end()
}
On the server side a Vert.x http server can be configured to automatically send back 100 Continue interim responses
when it receives an Expect: 100-Continue
header.
This is done by setting the option handle100ContinueAutomatically
.
If you’d prefer to decide whether to send back continue responses manually, then this property should be left at
false
(the default); you can then inspect the headers and call writeContinue
to have the client continue sending the body:
httpServer.request_handler() { |request|
if (request.get_header("Expect").equals_ignore_case?("100-Continue"))
# Send a 100 continue response
request.response().write_continue()
# The client should send the body when it receives the 100 response
request.body_handler() { |body|
# Do something with body
}
request.end_handler() { |v|
request.response().end()
}
end
}
You can also reject the request by sending back a failure status code directly: in this case the body should either be ignored or the connection should be closed (100-Continue is a performance hint and cannot be a logical protocol constraint):
httpServer.request_handler() { |request|
if (request.get_header("Expect").equals_ignore_case?("100-Continue"))
#
rejectAndClose = true
if (rejectAndClose)
# Reject with a failure code and close the connection
# this is probably best with persistent connection
request.response().set_status_code(405).put_header("Connection", "close").end()
else
# Reject with a failure code and ignore the body
# this may be appropriate if the body is small
request.response().set_status_code(405).end()
end
end
}
Client push
Server push is a new feature of HTTP/2 that enables sending multiple responses in parallel for a single client request.
A push handler can be set on a request to receive the request/response pushed by the server:
request = client.get("/index.html") { |response|
# Process index.html response
}
# Set a push handler to be aware of any resource pushed by the server
request.push_handler() { |pushedRequest|
# A resource is pushed for this request
puts "Server pushed #{pushedRequest.path()}"
# Set a handler for the response
pushedRequest.handler() { |pushedResponse|
puts "The response for the pushed request"
}
}
# End the request
request.end()
If the client does not want to receive a pushed request, it can reset the stream:
request.push_handler() { |pushedRequest|
if (pushedRequest.path() == "/main.js")
pushedRequest.reset()
else
# Handle it
end
}
When no handler is set, any pushed stream will be automatically cancelled by the client with
a stream reset (error code 8
).
Receiving custom HTTP/2 frames
HTTP/2 is a framed protocol with various frames for the HTTP request/response model. The protocol allows other kind of frames to be sent and received.
To receive custom frames, you can use the customFrameHandler on the response; it will get called every time a custom frame arrives. Here’s an example:
response.custom_frame_handler() { |frame|
  puts "Received a frame type=#{frame.type()} payload=#{frame.payload().to_string()}"
}
Enabling compression on the client
The http client comes with support for HTTP Compression out of the box.
This means the client can let the remote http server know that it supports compression, and will be able to handle compressed response bodies.
An HTTP server is free to either compress the body with one of the supported compression algorithms or to send it back uncompressed. So this is only a hint for the HTTP server, which it may ignore at will.
To tell the HTTP server which compression algorithms the client supports, it includes an Accept-Encoding
header with
the supported algorithms as its value. Multiple compression algorithms are supported. In the case of Vert.x this
will result in the following header being added:
Accept-Encoding: gzip, deflate
The server will then choose one of these. You can detect if a server compressed the body by checking for the
Content-Encoding
header in the response sent back from it.
If the body of the response was compressed via gzip it will include for example the following header:
Content-Encoding: gzip
To enable compression set tryUseCompression
on the options
used when creating the client.
By default compression is disabled.
HTTP/1.x pooling and keep alive
Http keep alive allows http connections to be used for more than one request. This can be a more efficient use of connections when you’re making multiple requests to the same server.
For HTTP/1.x versions, the http client supports pooling of connections, allowing you to reuse connections between requests.
For pooling to work, keep alive must be true using keepAlive
on the options used when configuring the client. The default value is true.
When keep alive is enabled, Vert.x will add a Connection: Keep-Alive
header to each HTTP/1.0 request sent.
When keep alive is disabled, Vert.x will add a Connection: Close
header to each HTTP/1.1 request sent to signal
that the connection will be closed after completion of the response.
The maximum number of connections to pool for each server is configured using maxPoolSize.
When making a request with pooling enabled, Vert.x will create a new connection if there are fewer than the maximum number of connections already created for that server; otherwise it will add the request to a queue.
Keep alive connections will not be closed by the client automatically. To close them you can close the client instance.
Alternatively you can set idle timeout using idleTimeout
- any
connections not used within this timeout will be closed. Please note the idle timeout value is in seconds not milliseconds.
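Putting these options together, a sketch of a pooled client configuration; the values are arbitrary:

```ruby
options = {
  'keepAlive' => true,   # the default, shown for clarity
  'maxPoolSize' => 10,   # at most 10 pooled connections per server
  'idleTimeout' => 30    # close connections idle for 30 seconds
}
client = vertx.create_http_client(options)
```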
HTTP/1.1 pipe-lining
The client also supports pipe-lining of requests on a connection.
Pipe-lining means another request is sent on the same connection before the response from the preceding one has returned. Pipe-lining is not appropriate for all requests.
To enable pipe-lining, it must be enabled using pipelining
.
By default pipe-lining is disabled.
When pipe-lining is enabled requests will be written to connections without waiting for previous responses to return.
The number of pipe-lined requests over a single connection is limited by pipeliningLimit
.
This option defines the maximum number of http requests sent to the server awaiting a response. This limit ensures the
fairness of the distribution of the client requests over the connections to the same server.
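For example, a sketch of a pipe-lining configuration; the limit value is arbitrary:

```ruby
options = {
  'keepAlive' => true,     # pipe-lining requires keep alive
  'pipelining' => true,
  'pipeliningLimit' => 5   # at most 5 outstanding requests per connection
}
client = vertx.create_http_client(options)
```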
HTTP/2 multiplexing
HTTP/2 advocates using a single connection to a server; by default the http client uses a single connection for each server, and all the streams to the same server are multiplexed over that connection.
When the client needs to use more than a single connection and use pooling, the http2MaxPoolSize
shall be used.
When it is desirable to limit the number of multiplexed streams per connection and use a connection
pool instead of a single connection, http2MultiplexingLimit
can be used.
clientOptions = {
'http2MultiplexingLimit' => 10,
'http2MaxPoolSize' => 3
}
# Uses up to 3 connections and up to 10 streams per connection
client = vertx.create_http_client(clientOptions)
The multiplexing limit for a connection is a setting set on the client that limits the number of streams
of a single connection. The effective value can be even lower if the server sets a lower limit
with the SETTINGS_MAX_CONCURRENT_STREAMS
setting.
HTTP/2 connections will not be closed by the client automatically. To close them you can call close
or close the client instance.
Alternatively you can set idle timeout using idleTimeout
- any
connections not used within this timeout will be closed. Please note the idle timeout value is in seconds not milliseconds.
HTTP connections
The HttpConnection
offers the API for dealing with HTTP connection events, lifecycle
and settings.
HTTP/2 implements fully the HttpConnection
API.
HTTP/1.x implements partially the HttpConnection
API: only the close operation,
the close handler and exception handler are implemented. This protocol does not provide semantics for
the other operations.
Server connections
The connection
method returns the request connection on the server:
connection = request.connection()
A connection handler can be set on the server to be notified of any incoming connection:
server = vertx.create_http_server(http2Options)
server.connection_handler() { |connection|
puts "A client connected"
}
Client connections
The connection
method returns the request connection on the client:
connection = request.connection()
A connection handler can be set on the request to be notified when the connection happens:
request.connection_handler() { |connection|
puts "Connected to the server"
}
Connection settings
An HTTP/2 connection is configured by the Http2Settings
data object.
Each endpoint must respect the settings sent by the other side of the connection.
When a connection is established, the client and the server exchange initial settings. Initial settings
are configured by initialSettings
on the client and
initialSettings
on the server.
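For example, a sketch of setting the client’s initial settings; the value is arbitrary:

```ruby
options = {
  'initialSettings' => {
    # Limit the number of concurrent streams this endpoint will allow
    'maxConcurrentStreams' => 50
  }
}
client = vertx.create_http_client(options)
```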
The settings can be changed at any time after the connection is established:
connection.update_settings({
'maxConcurrentStreams' => 100
})
As the remote side should acknowledge on reception of the settings update, it’s possible to give a callback to be notified of the acknowledgment:
connection.update_settings({
'maxConcurrentStreams' => 100
}) { |ar_err,ar|
if (ar_err == nil)
puts "The settings update has been acknowledged "
end
}
Conversely the remoteSettingsHandler
is notified
when the new remote settings are received:
connection.remote_settings_handler() { |settings|
puts "Received new settings"
}
Note
|
this only applies to the HTTP/2 protocol |
Connection ping
HTTP/2 connection ping is useful for determining the connection round-trip time or checking the connection
validity: ping
sends a PING
frame to the remote
endpoint:
require 'vertx/buffer'
data = Vertx::Buffer.buffer()
i = 0
while (i < 8)
data.append_byte(i)
i+=1
end
connection.ping(data) { |pong_err,pong|
puts "Remote side replied"
}
Vert.x will automatically send an acknowledgement when a PING
frame is received;
a handler can be set to be notified of each ping received:
connection.ping_handler() { |ping|
puts "Got pinged by remote side"
}
The handler is just notified; the acknowledgement is sent regardless. This feature is aimed at implementing protocols on top of HTTP/2.
Note
|
this only applies to the HTTP/2 protocol |
Connection shutdown and go away
Calling shutdown
will send a GOAWAY
frame to the
remote side of the connection, asking it to stop creating streams: a client will stop doing new requests
and a server will stop pushing responses. After the GOAWAY
frame is sent, the connection
waits some time (30 seconds by default) until all current streams are closed, then closes the connection:
connection.shutdown()
The shutdownHandler
notifies when all streams have been closed; at that point the
connection is not yet closed.
It’s possible to just send a GOAWAY
frame, the main difference with a shutdown is that
it will just tell the remote side of the connection to stop creating new streams without scheduling a connection
close:
connection.go_away(0)
Conversely, it is also possible to be notified when a GOAWAY
frame is received:
connection.go_away_handler() { |goAway|
puts "Received a go away frame"
}
The shutdownHandler
will be called when all current streams
have been closed and the connection can be closed:
connection.go_away(0)
connection.shutdown_handler() { |v|
# All streams are closed, close the connection
connection.close()
}
This applies also when a GOAWAY
is received.
Note
|
this only applies to the HTTP/2 protocol |
Connection close
Connection close
closes the connection:
-
it closes the socket for HTTP/1.x
-
it performs a shutdown with no delay for HTTP/2; the
GOAWAY
frame will still be sent before the connection is closed.
The closeHandler
notifies when a connection is closed.
HttpClient usage
The HttpClient can be used in a Verticle or embedded.
When used in a Verticle, the Verticle should use its own client instance.
More generally a client should not be shared between different Vert.x contexts as it can lead to unexpected behavior.
For example a keep-alive connection will call the client handlers on the context of the request that opened the connection, subsequent requests will use the same context.
When this happens Vert.x detects it and logs a warning:
Reusing a connection with a different context: an HttpClient is probably shared between different Verticles
The HttpClient can be embedded in a non-Vert.x thread, like a unit test or a plain Java main
: the client handlers
will be called by different Vert.x threads and contexts; such contexts are created as needed. For production this
usage is not recommended.
Server sharing
When several HTTP servers listen on the same port, Vert.x orchestrates the request handling using a round-robin strategy.
Let’s take a verticle creating an HTTP server such as:
vertx.create_http_server().request_handler() { |request|
request.response().end("Hello from server #{self}")
}.listen(8080)
This service is listening on port 8080. So, when this verticle is instantiated multiple times, as with
vertx run io.vertx.examples.http.sharing.HttpServerVerticle -instances 2
, what happens? If both
verticles bound to the same port, you would receive a socket exception. Fortunately, Vert.x handles
this case for you. When you deploy another server on the same host and port as an existing server it doesn’t
actually try to create a new server listening on the same host/port. It binds only once to the socket. When
receiving a request it calls the server handlers following a round-robin strategy.
Let’s now imagine a client such as:
vertx.set_periodic(100) { |l|
vertx.create_http_client().get_now(8080, "localhost", "/") { |resp|
resp.body_handler() { |body|
puts body.to_string("ISO-8859-1")
}
}
}
Vert.x delegates the requests to one of the servers sequentially:
Hello from i.v.e.h.s.HttpServerVerticle@1
Hello from i.v.e.h.s.HttpServerVerticle@2
Hello from i.v.e.h.s.HttpServerVerticle@1
Hello from i.v.e.h.s.HttpServerVerticle@2
...
Consequently the servers can scale over available cores while each Vert.x verticle instance remains strictly single threaded, and you don’t have to do any special tricks like writing load-balancers in order to scale your server on your multi-core machine.
Using HTTPS with Vert.x
Vert.x http servers and clients can be configured to use HTTPS in exactly the same way as net servers.
Please see configuring net servers to use SSL for more information.
WebSockets
WebSockets are a web technology that allows a full duplex socket-like connection between HTTP servers and HTTP clients (typically browsers).
Vert.x supports WebSockets on both the client and server-side.
WebSockets on the server
There are two ways of handling WebSockets on the server side.
WebSocket handler
The first way involves providing a websocketHandler
on the server instance.
When a WebSocket connection is made to the server, the handler will be called, passing in an instance of
ServerWebSocket
.
server.websocket_handler() { |websocket|
puts "Connected!"
}
You can choose to reject the WebSocket by calling reject
.
server.websocket_handler() { |websocket|
if (websocket.path() == "/myapi")
websocket.reject()
else
# Do something
end
}
Upgrading to WebSocket
The second way of handling WebSockets is to handle the HTTP Upgrade request that was sent from the client, and
call upgrade
on the server request.
server.request_handler() { |request|
if (request.path() == "/myapi")
websocket = request.upgrade()
# Do something
else
# Reject
request.response().set_status_code(400).end()
end
}
The server WebSocket
The ServerWebSocket
instance enables you to retrieve the headers
,
path
, query
and
URI
of the HTTP request of the WebSocket handshake.
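For example, a sketch that logs the handshake information; the accessor names follow the methods listed above:

```ruby
server.websocket_handler() { |websocket|
  # Inspect the HTTP request that initiated the WebSocket handshake
  puts "Handshake path: #{websocket.path()}"
  puts "Handshake query: #{websocket.query()}"
  puts "Handshake URI: #{websocket.uri()}"
}
```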
WebSockets on the client
The Vert.x HttpClient
supports WebSockets.
You can connect a WebSocket to a server using one of the websocket
operations and
providing a handler.
The handler will be called with an instance of WebSocket
when the connection has been made:
client.websocket("/some-uri") { |websocket|
puts "Connected!"
}
Writing messages to WebSockets
If you wish to write a single binary WebSocket message to the WebSocket you can do this with
writeBinaryMessage
:
require 'vertx/buffer'
# Write a simple message
buffer = Vertx::Buffer.buffer().append_int(123).append_float(1.23)
websocket.write_binary_message(buffer)
If the WebSocket message is larger than the maximum websocket frame size as configured with
maxWebsocketFrameSize
then Vert.x will split it into multiple WebSocket frames before sending it on the wire.
Writing frames to WebSockets
A WebSocket message can be composed of multiple frames. In this case the first frame is either a binary or text frame followed by zero or more continuation frames.
The last frame in the message is marked as final.
To send a message consisting of multiple frames you create frames using
WebSocketFrame.binaryFrame
, WebSocketFrame.textFrame
or
WebSocketFrame.continuationFrame
and write them
to the WebSocket using writeFrame
.
Here’s an example for binary frames:
require 'vertx/web_socket_frame'
frame1 = Vertx::WebSocketFrame.binary_frame(buffer1, false)
websocket.write_frame(frame1)
frame2 = Vertx::WebSocketFrame.continuation_frame(buffer2, false)
websocket.write_frame(frame2)
# Write the final frame
frame3 = Vertx::WebSocketFrame.continuation_frame(buffer2, true)
websocket.write_frame(frame3)
In many cases you just want to send a websocket message that consists of a single final frame, so we provide a couple
of shortcut methods to do that with writeFinalBinaryFrame
and writeFinalTextFrame
.
Here’s an example:
require 'vertx/buffer'
# Send a websocket message consisting of a single final text frame:
websocket.write_final_text_frame("Geronimo!")
# Send a websocket message consisting of a single final binary frame:
buff = Vertx::Buffer.buffer().append_int(12).append_string("foo")
websocket.write_final_binary_frame(buff)
Reading frames from WebSockets
To read frames from a WebSocket you use the frameHandler
.
The frame handler will be called with instances of WebSocketFrame
when a frame arrives,
for example:
websocket.frame_handler() { |frame|
puts "Received a frame of size!"
}
Closing WebSockets
Use close
to close the WebSocket connection when you have finished with it.
Streaming WebSockets
The WebSocket
instance is also a ReadStream
and a
WriteStream
so it can be used with pumps.
When using a WebSocket as a write stream or a read stream it can only be used with WebSocket connections that use binary frames that are not split over multiple frames.
Using a proxy for HTTPS connections
The http client supports accessing HTTPS servers via an HTTPS proxy (HTTP/1.x CONNECT method, e.g. Squid) or a SOCKS4a or SOCKS5 proxy. The HTTP proxy protocol uses HTTP/1.x but can connect to HTTP/1.x and HTTP/2 servers.
The proxy can be configured in the HttpClientOptions
by setting a
ProxyOptions
object containing proxy type, hostname, port and optionally username and password.
Here’s an example:
options = {
'proxyOptions' => {
'type' => "HTTP",
'host' => "localhost",
'port' => 3128,
'username' => "username",
'password' => "secret"
}
}
client = vertx.create_http_client(options)
or using SOCKS5 proxy
options = {
'proxyOptions' => {
'type' => "SOCKS5",
'host' => "localhost",
'port' => 1080,
'username' => "username",
'password' => "secret"
}
}
client = vertx.create_http_client(options)
DNS resolution is always done on the proxy server; to achieve the functionality of a SOCKS4 client, it is necessary to resolve the DNS address locally.
Automatic clean-up in verticles
If you’re creating http servers and clients from inside verticles, those servers and clients will be automatically closed when the verticle is undeployed.