HTTP read timeout error



Error(-11): read Timeout for long JSON in POST requests #7246

Comments

santichente commented Apr 24, 2020 •

Basic Infos

  • [X] This issue complies with the issue POLICY doc.
  • [X] I have read the documentation at readthedocs and the issue is not addressed there.
  • [X] I have tested that the issue is present in current master branch (aka latest git).
  • [X] I have searched the issue tracker for a similar issue.
  • If there is a stack dump, I have decoded it.
  • [X] I have filled out all fields below.

Platform

  • Hardware: [ESP-12|ESP-01|ESP-07|ESP8285 device|other]
  • Core Version: [latest git hash or date]
  • Development Env: [Arduino IDE|Platformio|Make|other]
  • Operating System: [Windows|Ubuntu|MacOS]

Settings in IDE

  • Module: [Generic ESP8266 Module|Wemos D1 mini r2|Nodemcu|other]
  • Flash Mode: [qio|dio|other]
  • Flash Size: [4MB/1MB]
  • lwip Variant: [v1.4|v2 Lower Memory|Higher Bandwidth]
  • Reset Method: [ck|nodemcu]
  • Flash Frequency: [40Mhz]
  • CPU Frequency: [80Mhz|160MHz]
  • Upload Using: [OTA|SERIAL]
  • Upload Speed: [115200|other] (serial upload only)

Problem Description

Hi, I send an HTTP POST request with a JSON payload to my server and it works, but when I try to send a longer JSON body (allocated with a capacity of 2940, around 1666 characters or more), the response code is (-11), read timeout. I tried changing the CPU frequency, other module settings, and the timeout setting, but nothing helped. Sending the same HTTP POST request from Postman works correctly.
Thanks

MCVE Sketch

Debug Messages

WiFi connected
IP address:
192.168.0.3
[HTTP-Client][begin] url: http://development.plantit.io/device/getState/?device_id=1
[HTTP-Client][begin] host: development.plantit.io port: 80 url: /device/getState/?device_id=1
Post!
[HTTP-Client][sendRequest] type: ‘POST’ redirCount: 0
[HTTP-Client] connected to development.plantit.io:80
[HTTP-Client] sending request header

POST /device/getState/?device_id=1 HTTP/1.1
Host: development.plantit.io
User-Agent: ESP8266HTTPClient
Connection: keep-alive
Accept-Encoding: identity;q=1,chunked;q=0.1,*;q=0
Content-Length: 1465

[HTTP-Client][returnError] error(-11): read Timeout
[HTTP-Client][returnError] tcp stop
[HTTP] POST. failed, error: read Timeout
[HTTP-Client][end] tcp is closed



‘read timeout reached’ error issue #620

Comments

yechanpark commented Aug 7, 2019 •

Problem

My td-agent ingests about 600GB in 24 hours (calculated from string log size alone), and it has two responsibilities:

  • storing string logs as files
  • storing string logs as Elasticsearch documents (of course, the indices have mappings)

The file log content is identical to the Elasticsearch documents (a 1:1 match), and I have only a single Elasticsearch node (master, data, and coordination all in one; this can be scaled out later).

Average CPU usage is 10-20%, on an 8-core, 64GB RAM machine.

Sometimes, when I push documents to Elasticsearch via fluentd, a 'read timeout reached' error occurs.

After a while a 'retry succeeded' log appears and the input goes through, but the error and warning recur every time.

While the error persists the buffer is never flushed; records just accumulate as buffer files.

After a while there are too many buffer files to feed into Elasticsearch, so the number of buffer files grows without bound:

buffer files create speed >>>>> buffer files flush speed

It does not seem to be just an input/output throughput problem.

The following is from td-agent.log:

At 08:55:16, a 'could not push logs to Elasticsearch cluster ({:host=>"127.0.0.1", :port=>9200, :scheme=>"http"}): read timeout reached' error occurred.

So at that point the buffer had not been flushed.

During this time the document count in the destination index did not increase
(I checked this via Kibana).

This state persisted until 08:55:20, with buffer files being created continuously.

At 08:55:20 a 'retry succeeded' log appeared.

At that point I checked the buffer file directory, and the buffer files had been flushed into the destination index
(again, checked via Kibana).

But only a little of the buffer was flushed, and after a while 'read timeout reached' occurred again.

Steps to replicate

Expected Behavior or What you need to ask

How can I handle this error? Any ideas?

I tried request_timeout and another option,

but these options haven't helped so far.

request_timeout was never given more than 10s. I will try a bigger value (for example, 20s).
(But I am worried about that, so I will try it later.)

chunk_limit_size is 51m now, because the log-sending server's forward chunk limit size is 50m.
(The forwarding server flushes to the receiving server with flush_interval 1s,
and the receiving server flushes to Elasticsearch with flush_interval 1s.)


I previously set the receiving server's chunk_limit_size to 4m, but when the forwarding server's chunks exceeded 4m, the receiving server's td-agent raised the following error.

Using Fluentd and ES plugin versions

Fluentd : 1.3.3 (td-agent 3)
ES : 6.7

OS version
CentOS 7.6

Bare Metal or within Docker or Kubernetes or others?
neither Docker nor Kubernetes

Fluentd v0.12 or v0.14/v1.0

  • paste result of fluentd --version or td-agent --version
    td-agent 1.3.3

ES plugin 3.x.y/2.x.y or 1.x.y

  • paste boot log of fluentd or td-agent
  • paste result of fluent-gem list, td-agent-gem list, or your Gemfile.lock
    elasticsearch (6.1.0)
    elasticsearch-api (6.1.0)
    elasticsearch-transport (6.1.0)
    fluent-plugin-elasticsearch (3.0.1)

ES version (optional)
ES 6.7

ES template(s) (optional)


cosmo0920 commented Aug 8, 2019

How can I handle this error? Any ideas?

Could you share the error log from before the messages below?

Your issue report only shows a buffer-flush failure and then a 'retry succeeded' message for chunk 58f650dc7d3fb60fde13ad3f6a8a39d6.

yechanpark commented Aug 8, 2019 •

I solved this problem.

To be exact, I didn't solve the 'read timeout reached' problem itself,

but buffers no longer pile up (on both the sender and receiver servers).

The problem was that my td-agent had two heavy responsibilities:

  • storing string logs as files
  • storing string logs as Elasticsearch documents

These logs are identical to each other (a 1:1 match).

As a result, td-agent became a bottleneck.

  • Sender server:
    The sender's td-agent succeeded only intermittently, so it retried constantly;
    sometimes a request succeeded, sometimes it failed.

Meanwhile the sender's td-agent buffer kept stacking up until it overflowed.
After 72 hours, all buffer files were moved to secondary directories.

  • Receiver server:
    Its td-agent could not keep up with the sender's requests (because it was bottlenecked),
    though the process was never killed.
    Indeed, the receiver's td-agent did eventually receive all of the sender's requests.

The receiver flushed its buffer with difficulty, but with too many responsibilities its overall performance was low.

The following is what I tried.

  1. Do not store logs as Elasticsearch documents, only as files.
    This works: the receiving server's td-agent buffer still grows, but not faster than the buffer flush speed.
    buffer files create speed <<<<< buffer files flush speed

As a result, the files-only strategy works.

  2. Do not store logs as files, only as Elasticsearch documents.
    This works too: the receiving server's td-agent buffer still grows, but not faster than the buffer flush speed (the same as when storing files).
    buffer files create speed <<<<< buffer files flush speed

As a result, the Elasticsearch-documents-only strategy works.

My ES cluster and td-agent are stable now.
I will split these responsibilities across separate servers.

But the 'read timeout reached' error still occurs.

If you are in this situation, consider my suggestions:

  1. How many heavy responsibilities does your td-agent have?
  • If your td-agent has a lot of heavy responsibilities, split them across other servers' td-agents.
  • For example, server1 saves only files and server2 writes only to Elasticsearch.
  • If you can't split the responsibilities, try another strategy: have td-agent store only files, and write your own script that reads those files and uses the Elasticsearch _bulk API to insert the documents (see the sketch after this list). In this case, if you can handle concurrency control for thread-safe file access, you can use multiple processes as well as threads.
  2. How many td-agent nodes are you running?
  • Generally there is only one td-agent process per node. td-agent supports setting the number of threads, but as I wrote above, simply increasing the thread count cannot be the solution.
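A minimal sketch of that kind of script, assuming newline-delimited JSON log files and a single local Elasticsearch 6.x node; the endpoint URL, index name, batch size, and file path are hypothetical:

```python
import json
import requests

ES_BULK_URL = "http://127.0.0.1:9200/_bulk"  # hypothetical single-node endpoint
INDEX = "app-logs"                           # hypothetical index name
BATCH_DOCS = 1000                            # documents per _bulk request

def flush(lines):
    # The _bulk body is NDJSON and must end with a trailing newline.
    body = "\n".join(lines) + "\n"
    resp = requests.post(
        ES_BULK_URL,
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/x-ndjson"},
        timeout=(5, 30),  # explicit connect/read timeouts
    )
    resp.raise_for_status()

def bulk_insert(path):
    """Read newline-delimited JSON records and index them via the _bulk API."""
    lines = []
    with open(path) as f:
        for raw in f:
            record = raw.strip()
            if not record:
                continue
            # Each document is an action metadata line followed by its source line.
            # ES 6.x still expects a _type in the action metadata.
            lines.append(json.dumps({"index": {"_index": INDEX, "_type": "_doc"}}))
            lines.append(record)
            if len(lines) >= 2 * BATCH_DOCS:
                flush(lines)
                lines = []
    if lines:
        flush(lines)

if __name__ == "__main__":
    bulk_insert("/var/log/td-agent/app.log")  # hypothetical file path
```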

Closing the issue; I hope this helps someone facing the same situation.


Read timeout error, Write error: Broken pipe





1 Topic by bukinay1 2021-03-31 08:52:04


Topic: Read timeout error, Write error: Broken pipe

Please explain how to get rid of these errors: Read timeout error and Write error: Broken pipe.

The hardware is an Arduino Pro Mini + ESP-01, with an Ethernet connection over Wi-Fi.
Operation:
The program runs a 5-second cycle. During the first second it takes readings from the DS18B20 temperature sensors, then performs the program's calculations, outputs the data via RemoteXY, and waits for the remainder of the 5-second cycle.
I do not use the delay() function.
Everything works when connected over a USB cable.
Here is the connection log:

1 10:20:14.995 App version 4.7.13, API 28
2 10:20:14.995 Device started
3 10:20:14.995 Ethernet connection started
4 10:20:15.000 WiFi network bound (129)
5 10:20:15.000 Connecting to 192.168.1.14:6377.
6 10:20:15.252 Connecting to 192.168.1.14:6377.
7 10:20:15.264 Connection established
8 10:20:15.285 Receiving GUI configuration.
9 10:20:16.288 Read timeout error
10 10:20:16.289 Receiving GUI configuration, try 2 .
11 10:20:17.291 Read timeout error
12 10:20:17.291 Receiving GUI configuration, try 3 .
13 10:20:26.440 Read timeout error
14 10:20:26.440 Receiving GUI configuration, try 4 .
15 10:20:26.464 GUI configuration received
16 10:20:26.474 Receiving variables.
17 10:20:31.477 Read timeout error
18 10:20:31.477 Receiving variables, try 2 .
19 10:20:36.481 Read timeout error
20 10:20:36.481 Receiving variables, try 3 .
21 10:20:36.481 Write error: Broken pipe
22 10:20:36.483 Disconnect

At 9600 baud I sometimes manage to connect.
With a 2-second cycle I was able to connect at 38400 baud.


A ConnectionError ("Read timed out.") is raised instead of ReadTimeout when using the timeout keyword argument of Session.get() #5430

Comments

nlykkei commented Apr 19, 2020 •

Consider the code below (main.py). When a temporary network disconnect occurs without the timeout keyword argument to Session.get(), the client may hang indefinitely and no exception is raised.

However, if I use the timeout keyword argument, the application raises a ConnectionError from models.py corresponding to the urllib3.exceptions.ReadTimeoutError:

requests.exceptions.ConnectionError: HTTPSConnectionPool(host='confluence.danskenet.net', port=443): Read timed out.

Given that the exception is only raised when using the timeout keyword argument, why isn't Requests raising a ReadTimeout exception instead? In particular, the ConnectionError's exception message "Read timed out." suggests that it should be a ReadTimeout exception.

To mitigate the issue I’m currently performing a regular expression match on the exception message, which is a bad practice:
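The original snippet isn't shown below; a hypothetical sketch of that kind of message-matching workaround, included only to illustrate why it is fragile, might look like this:

```python
import re
import requests

try:
    requests.Session().get("https://example.com", timeout=5)  # hypothetical URL
except requests.exceptions.ConnectionError as err:
    # Fragile: dispatching on the exception *message* because the type is wrong.
    if re.search(r"Read timed out", str(err)):
        print("treating as read timeout")
    else:
        raise
```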

main.py:

Exception:

Expected Result

I would expect a requests.exceptions.ReadTimeout to be raised.

Actual Result

A requests.exceptions.ConnectionError was raised instead, with the error message: Read timed out.
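A minimal sketch of the mismatch, with a handler for both exception types (the URL is hypothetical); per this report, a timeout that fires while the response is being read can surface as ConnectionError even though its message says "Read timed out.":

```python
import requests

try:
    resp = requests.Session().get("https://example.com/slow", timeout=5)
    body = resp.content  # reading the body can also hit the read timeout
except requests.exceptions.ReadTimeout:
    # The exception one would expect for a read timeout.
    print("ReadTimeout")
except requests.exceptions.ConnectionError as err:
    # Per this issue, the read timeout is wrapped as a ConnectionError
    # whose message still says "Read timed out."
    print(f"ConnectionError: {err}")
```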

System Information

This command is only available on Requests v2.16.4 and greater. Otherwise,
please provide some basic information about your system (Python version,
operating system, &c).


wxllow commented Jun 11, 2020

I'd be happy to try and make a pull request on this, but no one has said whether that's just how it's supposed to work or not, so I'll wait until then.

sigmavirus24 commented Jun 11, 2020

This is behaving exactly as it unfortunately must until a breaking API change can be introduced. The behaviour was likely (I don’t remember this specifically) introduced to raise the same exception as it used to raise before urllib3 introduced finer-grained exceptions. Is that great? No. Is it what will happen until a real requests 3.0 can happen? Almost certainly due to backwards compatibility concerns and API stability.

sigmavirus24 commented Jul 30, 2020

@simonvanderveldt So you're complaining about exception wrapping, which is a user feature that lets them avoid having to think about handling exceptions from the N libraries used inside Requests? That's tangential to this issue. Please let's not muddy the water with this conversation.

stephanebruckert commented Jul 30, 2020 •

I am following this issue as I also get mixed results when a timeout is raised.

My timeout is set to (1, 10) connect/read, and when it times out I can see either of these two:

HTTPSConnectionPool(host='cloud-collector.newrelic.com', port=443): Read timed out. (read timeout=10)

HTTPSConnectionPool(host='cloud-collector.newrelic.com', port=443): Read timed out. (read timeout=1)

I imagine the second one should mention connect instead of read, possibly making one believe that the timeouts are not being applied correctly.


I am still trying to narrow down the problem before creating a new github issue, but it seems something isn’t too clear in the way exceptions are handled.

Edit: I’m not entirely sure this is the same issue so I created #5544

jakermx commented Feb 5, 2021

Try enabling TCP keep-alive in the OS TCP/IP stack; it is disabled by default. For Linux, I use settings along these lines to keep the connection up (see the sketch below).

If the socket gets dropped because the server didn't receive any TCP keep-alive messages while processing your request, you will get a ConnectionTimeout or ConnectionReset error, because that comes from a lower layer than the data transfer; if the socket is closed, no data can be transferred.
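The original snippet isn't included above; one way to apply this idea from Python, sketched here under the assumption of Linux-only socket options and requests with urllib3 underneath, is to mount an adapter that turns on SO_KEEPALIVE with shorter timers:

```python
import socket

import requests
from requests.adapters import HTTPAdapter
from urllib3.connection import HTTPConnection


class KeepAliveAdapter(HTTPAdapter):
    """Pass extra socket options down to urllib3's connection pool."""

    def init_poolmanager(self, *args, **kwargs):
        kwargs["socket_options"] = HTTPConnection.default_socket_options + [
            (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1),     # enable keep-alive
            (socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60),   # idle secs before probing (Linux)
            (socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10),  # secs between probes (Linux)
            (socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5),     # probes before giving up (Linux)
        ]
        super().init_poolmanager(*args, **kwargs)


session = requests.Session()
session.mount("https://", KeepAliveAdapter())
session.mount("http://", KeepAliveAdapter())
resp = session.get("https://example.com", timeout=(3, 30))  # hypothetical URL
```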

YashasviBhatt commented Mar 10, 2021 •

Hello @nlykkei. From your question I assume you're trying to handle the exception raised by urllib3; if so, this is what I did to handle it.
Instead of using except ReadTimeout as e:
I used except requests.ReadTimeout as e: and it worked perfectly.
Please let me know whether you had the same problem and whether you have managed to solve it.
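For reference, a minimal sketch of that handler (the URL and timeout values are hypothetical); the point is to catch requests' own wrapper class, which is re-exported at the top level of the package, rather than urllib3's ReadTimeoutError:

```python
import requests

try:
    requests.get("https://example.com/api", timeout=(3, 10))  # hypothetical URL
except requests.ReadTimeout as e:
    # requests.ReadTimeout is the same class as requests.exceptions.ReadTimeout.
    print(f"read timeout: {e}")
```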

jakermx commented Mar 10, 2021

Hello @nlykkei. From your question I assume you're trying to handle the exception raised by urllib3; if so, this is what I did to handle it.
Instead of using except ReadTimeout as e:
I used except requests.ReadTimeout as e: and it worked perfectly.
Please let me know whether you had the same problem and whether you have managed to solve it.

Well, you are correct about the exception handling, but the real issue is in the OS libraries. Even if your request forces persistent connections by enabling the Connection: Keep-Alive header on the HTTP/HTTPS request, there are two possible scenarios. The first, easy one: the server responds with a connection close, or the encoding is set to a chunked response; to resolve this, just iterate over the response until you get length 0 in the response header.

But at least in my findings, the major issue depends on the lower layers. Since the content providers don't want to spend resources on "probably dead" connections, they close them sooner than the IETF RFC standard that defines TCP keep-alive behaviour, which only starts sending L4 TCP keep-alives after 2 hours. That is insane and obviously wastes resources. So what I have done is turn on the SO_KEEPALIVE flag and set the timers lower, so the connection can go back to the connection pool and resources on your device are released when idle.

I don't like catching exceptions and doing retries when they are not necessary.

But you can retry in your own code. Don't set the retry parameter, because you will get the same behaviour: the lower layer will retry the high-level request on the same dead connection.

Cheers from Mexico

vikasnavgire commented Jun 16, 2021 •

I faced the same issue; removing the headers worked for me.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/domainvalidation/requests/api.py", line 76, in get
    return request('get', url, params=params, **kwargs)
  File "/opt/domainvalidation/requests/api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "/opt/domainvalidation/requests/sessions.py", line 530, in request
    resp = self.send(prep, **send_kwargs)
  File "/opt/domainvalidation/requests/sessions.py", line 643, in send
    r = adapter.send(request, **kwargs)
  File "/opt/domainvalidation/requests/adapters.py", line 529, in send
    raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='www.123xyz.com', port=443): Read timed out. (read timeout=15)

I'm not sure whether the behavior is specific to the server.
Note: 123xyz is just an example domain.

