
[lwip-users] Duplicate packets received?


From: mitchj
Subject: [lwip-users] Duplicate packets received?
Date: Wed, 23 Aug 2017 15:16:09 -0700 (MST)

Hello community, I am testing my implementation and have found that I
sometimes receive more than 1460 bytes at a time, and the extra bytes appear
to be duplicate data. I know this because I am sending and receiving the data
with a simple Python script, and the payload comes back with duplicated
segments. Initially I suspected a duplicate send, but I think I have
eliminated that possibility. What might cause this glitch?

tcp_recved: received 1460 bytes, wnd 5840 (0).
tcp_recved: received 3568 bytes, wnd 5840 (0). <--- the failure occurs when
my receive function grabs more than one segment per callback, and duplicate
data appears to be the culprit.

Here is my receive function. It simply replaces in_pb with the received pbuf
when in_pb currently holds no data, and otherwise concatenates the received
pbuf (p) onto in_pb with pbuf_cat().

struct EStreamDataSetTCP
{
    uint32_t end_port;
    uint32_t start_port;
    ip_addr_t dest_ip;
    struct tcp_pcb *pcb;
    struct pbuf *out_pb;
    struct pbuf *in_pb;
    err_t err_val;
};

static err_t tcp_client_raw_recv(void *arg, struct tcp_pcb *tpcb,
                                 struct pbuf *p, err_t err)
{
  struct EStreamDataSetTCP *data_set = (struct EStreamDataSetTCP *) arg;

  if (p == NULL) {
    /* Remote host closed the connection. */
    data_set->err_val = err;
    return ERR_OK;
  }

  if (err != ERR_OK) {
    /* On error we still own p and must release it. */
    pbuf_free(p);
    data_set->err_val = err;
    return err;
  }

  if (data_set->in_pb == NULL || data_set->in_pb->tot_len == 0) {
    if (data_set->in_pb != NULL) {
      pbuf_free(data_set->in_pb);
    }
    data_set->in_pb = p;
  } else {
    /* Append the new data behind whatever is still unconsumed. */
    pbuf_cat(data_set->in_pb, p);
  }
  tcp_recved(tpcb, p->tot_len);

  data_set->err_val = err;
  return ERR_OK;
}

Thank you for the help!



--
View this message in context: 
http://lwip.100.n7.nabble.com/Duplicate-packets-received-tp30530.html
Sent from the lwip-users mailing list archive at Nabble.com.


