Boost asio, single TCP server, many clients
Problem description
I am creating a TCP server that will use boost asio, which will accept connections from many clients, receive data, and send confirmations. The thing is that I want to be able to accept all the clients, but I want to work with only one at a time. I want all the other transactions to be kept in a queue.
Example:
- Client1 connects
- Client2 connects
- Client1 sends data and requests a reply
- Client2 sends data and requests a reply
- Client2's request is put into the queue
- Client1's data is read, the server replies, and the transaction ends
- Client2's request is taken from the queue, its data is read, the server replies, and the transaction ends
So this is something between an asynchronous server and a blocking server. I want to do just one thing at a time, but at the same time I want to be able to store all the client sockets and their demands in a queue.
I was able to create server-client communication with all the functionality that I need, but only on a single thread. Once a client disconnects, the server is terminated as well. I don't really know how to start implementing what I have mentioned above. Should I open a new thread each time a connection is accepted? Should I use async_accept or a blocking accept?
I have read the boost::asio chat example, where many clients connect to a single server, but there is no queuing mechanism like the one I need here.
I am aware that this post might be a bit confusing, but TCP servers are new to me, so I am not familiar enough with the terminology. There is also no source code to post because I am asking only for help with the concept of this project.
Recommended answer
Keep accepting.
You show no code, but it typically looks like this:
void do_accept() {
    acceptor_.async_accept(socket_, [this](boost::system::error_code ec) {
        std::cout << "async_accept -> " << ec.message() << "\n";
        if (!ec) {
            std::make_shared<Connection>(std::move(socket_))->start();
            do_accept(); // THIS LINE
        }
    });
}
If you don't include the line marked // THIS LINE, you will indeed not accept more than one connection.
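For context, here is a minimal skeleton that the snippet above could sit in; the Listener/Connection names, members, and the empty start() body are assumptions for illustration, not code from the question:

#include <boost/asio.hpp>
#include <iostream>
#include <memory>

using boost::asio::ip::tcp;

struct Connection : std::enable_shared_from_this<Connection> {
    explicit Connection(tcp::socket socket) : socket_(std::move(socket)) {}
    void start() { /* read the request, send the confirmation, ... */ }
  private:
    tcp::socket socket_;
};

struct Listener {
    Listener(boost::asio::io_service& svc, unsigned short port)
        : acceptor_(svc, tcp::endpoint(tcp::v4(), port)), socket_(svc) {
        do_accept();
    }

  private:
    void do_accept() {
        acceptor_.async_accept(socket_, [this](boost::system::error_code ec) {
            std::cout << "async_accept -> " << ec.message() << "\n";
            if (!ec) {
                std::make_shared<Connection>(std::move(socket_))->start();
                do_accept(); // keep accepting the next client
            }
        });
    }

    tcp::acceptor acceptor_;
    tcp::socket   socket_;
};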
If this doesn't help, please include some code we can work from.
This uses just standard library features for the non-network part. The network part is as shown before:
#include <boost/asio.hpp>
#include <boost/asio/high_resolution_timer.hpp>

// standard headers added here so the listing is self-contained
#include <algorithm>
#include <condition_variable>
#include <deque>
#include <functional>
#include <iostream>
#include <istream>
#include <iterator>
#include <memory>
#include <mutex>
#include <stdexcept>
#include <string>
#include <thread>
#include <vector>

using namespace std::chrono_literals;
using Clock = std::chrono::high_resolution_clock;

namespace Shared {
    using PostRequest = std::function<void(std::istream& is)>;
}

namespace Network {
    namespace ba = boost::asio;
    using ba::ip::tcp;
    using error_code = boost::system::error_code;
    using Shared::PostRequest;

    struct Connection : std::enable_shared_from_this<Connection> {
        Connection(tcp::socket&& s, PostRequest poster) : _s(std::move(s)), _poster(poster) {}

        void process() {
            auto self = shared_from_this();
            // read until the client closes its side (EOF), then hand the request off
            ba::async_read(_s, _request, [this, self](error_code ec, size_t) {
                if (!ec || ec == ba::error::eof) {
                    std::istream reader(&_request);
                    _poster(reader);
                }
            });
        }

      private:
        tcp::socket   _s;
        ba::streambuf _request;
        PostRequest   _poster;
    };

    struct Server {
        Server(unsigned short port, PostRequest poster) : _port(port), _poster(poster) {}

        void run_for(Clock::duration d = 30s) {
            // after the given duration, close the acceptor so run() can return
            _stop.expires_from_now(d);
            _stop.async_wait([this](error_code ec) { if (!ec) _svc.post([this] { _a.close(); }); });

            _a.listen();
            do_accept();

            _svc.run();
        }

      private:
        void do_accept() {
            _a.async_accept(_s, [this](error_code ec) {
                if (!ec) {
                    std::make_shared<Connection>(std::move(_s), _poster)->process();
                    do_accept(); // keep accepting further clients
                }
            });
        }

        unsigned short _port;
        PostRequest    _poster;

        ba::io_service            _svc;
        ba::high_resolution_timer _stop { _svc };
        tcp::acceptor             _a { _svc, tcp::endpoint {{}, _port } };
        tcp::socket               _s { _svc };
    };
}
The only "connection" to the work service part is the PostRequest handler that is passed to the server at construction:
Network::Server server(6767, handler);
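A handler is just any callable matching Shared::PostRequest, i.e. taking a std::istream&. A hypothetical stand-in, only to show the shape of the interface (the real driver below enqueues the parsed request instead):

// Sketch only: a trivial PostRequest that dumps the request body to stdout.
Shared::PostRequest handler = [](std::istream& is) {
    std::string body(std::istreambuf_iterator<char>(is), {});
    std::cout << "got request: " << body << "\n";
};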
I've also opted for async operations, so we can have a timer to stop the service, even though we do not use any threads:
server.run_for(3s); // this blocks
The Work Part
This is completely separate, and it will use threads. First, let's define a Request and a thread-safe Queue:
namespace Service {
    struct Request {
        std::vector<char> data; // or whatever you read from the sockets...
    };

    Request parse_request(std::istream& is) {
        Request result;
        result.data.assign(std::istream_iterator<char>(is), {});
        return result;
    }

    struct Queue {
        Queue(size_t max = 50) : _max(max) {}

        void enqueue(Request req) {
            std::unique_lock<std::mutex> lk(mx);
            cv.wait(lk, [this] { return _queue.size() < _max; }); // block while the queue is full
            _queue.push_back(std::move(req));
            cv.notify_one();
        }

        Request dequeue(Clock::time_point deadline) {
            Request req;
            {
                std::unique_lock<std::mutex> lk(mx);
                _peak = std::max(_peak, _queue.size());
                if (cv.wait_until(lk, deadline, [this] { return _queue.size() > 0; })) {
                    req = std::move(_queue.front());
                    _queue.pop_front();
                    cv.notify_one();
                } else {
                    // nothing arrived before the deadline
                    throw std::range_error("dequeue deadline");
                }
            }
            return req;
        }

        size_t peak_depth() const {
            std::lock_guard<std::mutex> lk(mx);
            return _peak;
        }

      private:
        mutable std::mutex mx;
        mutable std::condition_variable cv;
        size_t _max = 50;
        size_t _peak = 0;
        std::deque<Request> _queue;
    };
This is nothing special, and doesn't actually use threads yet. Let's make a worker function that accepts a reference to a queue (more than one worker can be started if so desired):
    void worker(std::string name, Queue& queue, Clock::duration d = 30s) {
        auto const deadline = Clock::now() + d;

        while (true) try {
            auto r = queue.dequeue(deadline);
            (std::cout << "Worker " << name << " handling request '").write(r.data.data(), r.data.size()) << "'\n";
        } catch (std::exception const& e) {
            // a dequeue timeout (deadline passed) ends this worker
            std::cout << "Worker " << name << " got " << e.what() << "\n";
            break;
        }
    }
}
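Before wiring it to the network server, the queue and worker can be exercised on their own. A standalone sketch, not from the original answer, with made-up request texts (it would be a separate test program, since the real driver below defines its own main):

int main() {
    Service::Queue queue;

    // one worker with a short 2s deadline so the test ends on its own
    std::thread w([&queue] { Service::worker("solo", queue, 2s); });

    for (std::string text : { "first", "second", "third" }) {
        Service::Request req;
        req.data.assign(text.begin(), text.end());
        queue.enqueue(std::move(req));
    }

    w.join(); // the worker drains the queue, hits its deadline, and exits
}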
The main Driver
Here's where the Queue gets instantiated, and both the network server and some worker threads are started:
int main() {
    Service::Queue queue;

    auto handler = [&](std::istream& is) {
        queue.enqueue(Service::parse_request(is));
    };

    Network::Server server(6767, handler);

    std::vector<std::thread> pool;
    pool.emplace_back([&queue] { Service::worker("one", queue, 6s); });
    pool.emplace_back([&queue] { Service::worker("two", queue, 6s); });

    server.run_for(3s); // this blocks

    for (auto& thread : pool)
        if (thread.joinable())
            thread.join();

    std::cout << "Maximum queue depth was " << queue.peak_depth() << "\n";
}
Live demo
See it Live On Coliru
A test load looks like this:
for a in "hello world" "the quick" "brown fox" "jumped over" "the pangram" "bye world"
do
netcat 127.0.0.1 6767 <<< "$a" || echo "not sent: '$a'"&
done
wait
It prints something like:
Worker one handling request 'brownfox'
Worker one handling request 'thepangram'
Worker one handling request 'jumpedover'
Worker two handling request 'Worker helloworldone handling request 'byeworld'
Worker one handling request 'thequick'
'
Worker one got dequeue deadline
Worker two got dequeue deadline
Maximum queue depth was 6
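The garbled lines in that output come from both worker threads writing to std::cout at the same time; the requests themselves are handled correctly, only the logging races. If you want clean logs, one small tweak (not part of the original answer) would be to funnel the output through a mutex-guarded helper and call it from Service::worker instead of writing to std::cout directly:

#include <iostream>
#include <mutex>
#include <string>
#include <vector>

// Sketch only: serialize log output so messages from concurrent workers don't interleave.
inline void log_request(std::string const& worker, std::vector<char> const& data) {
    static std::mutex mx;
    std::lock_guard<std::mutex> lk(mx);
    (std::cout << "Worker " << worker << " handling request '")
        .write(data.data(), data.size()) << "'\n";
}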