An Atypical Out-of-Bounds Memory Access and the Linux Process Memory Layout


This article walks through the analysis of an out-of-bounds memory access to illustrate the memory layout of a Linux process. The bug was a lucky coincidence: the program never hit a segmentation fault and kept running, yet produced very puzzling results.

The setup: a test program for Linux message queues, consisting of a sender and a receiver. The receiver reads with IPC_NOWAIT; the sender posts one message at a fixed interval. Both sides keep an operation counter. The receiver polls at a much shorter interval than the sender sends, so it often finds the queue empty; in that case it only bumps its counter, sleeps briefly, and tries the next read.

The code of the buggy test program:

msg.h

#ifndef _MSG__H__
#define _MSG__H__

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

#define BODY_LEN 1024

typedef struct _testmsg
{
    long type;
    char msgbody[1];
} testmsg, *ptestmsg;

#define MSG_TYPE 0xFF
#define MSGS_LEN (sizeof(long) + sizeof(char) * BODY_LEN)

/* Normally the key_t would come from ftok(); for simplicity, use a fixed key. */
#define MSG_KEY (key_t)0x320310F2

int open_queue(void)
{
    /* Get the queue ID, creating the queue (with rw permissions) if it does not exist. */
    return msgget(MSG_KEY, IPC_CREAT | 0666);
}

#endif
testrcvmain.c

#include <unistd.h>
#include <stdio.h>
#include <errno.h>
#include "msg.h"

void do_statistic(void);

unsigned char rcvbuf[MSGS_LEN] = {0};

int main(int argc, char** argv)
{
    int msqid = open_queue();
    if (msqid == -1)
    {
        printf("error when opening the queue!\n");
        return -1;
    }

    while (1)
    {
        int ret = msgrcv(msqid, rcvbuf, MSGS_LEN, MSG_TYPE, IPC_NOWAIT);
        do_statistic();
        if (ret == -1)
        {
            if (ENOMSG == errno || EAGAIN == errno)
            {
                usleep(12000);
                continue;
            }
            printf("msgrcv failed!\n");
        }

        usleep(12030);
    }

    return 0;
}

void do_statistic(void)
{
    static int msgcount = 0;
    msgcount++;
    printf("do recived msg %d times.\n", msgcount);
}
testsendmain.c

#include <unistd.h>
#include <stdio.h>
#include <string.h>
#include "msg.h"

unsigned char send_msg[MSGS_LEN] = {0};
unsigned char* some_array[1024] = {0};

int main(int argc, char** argv)
{
    int msqid = open_queue();
    if (msqid == -1)
    {
        printf("error when opening the queue!\n");
        return -1;
    }

    int i = 0;
    while (1)
    {
        ptestmsg p = (ptestmsg)send_msg;
        p->type = MSG_TYPE;
        memset(p->msgbody, 0, BODY_LEN);
        int ret = msgsnd(msqid, send_msg, MSGS_LEN, 0);
        if (-1 == ret)
        {
            printf("error when sending the msg\n");
            return -1;
        }
        printf("sent %d msges.\n", ++i);
        usleep(500000);
    }

    return 0;
}
Makefile

all: clean msgrcv msgsend

msgrcv:
	gcc -g testrcvmain.c -o msgrcv

msgsend:
	gcc -g testsendmain.c -o msgsend

clean:
	rm -f ./msgrcv ./msgsend
Program output

Sender output:
[root@Shentar ~/myprogs/c/msgrcv/msgrcv]# ./msgsend 
sent 1 msges.
sent 2 msges.
sent 3 msges.
sent 4 msges.
sent 5 msges.
sent 6 msges.
sent 7 msges.
sent 8 msges.
sent 9 msges.
sent 10 msges.
sent 11 msges.
sent 12 msges.
sent 13 msges.
sent 14 msges.
sent 15 msges.
sent 16 msges.
sent 17 msges.
sent 18 msges.
sent 19 msges.
sent 20 msges.
sent 21 msges.
sent 22 msges.
sent 23 msges.
Receiver output:
[root@Shentar ~/myprogs/c/msgrcv/msgrcv]# ./msgrcv
do recived msg 1 times.
do recived msg 1 times.
do recived msg 1 times.
do recived msg 1 times.
do recived msg 1 times.
do recived msg 1 times.
do recived msg 2 times.
do recived msg 3 times.
do recived msg 4 times.
do recived msg 5 times.
do recived msg 6 times.
do recived msg 7 times.
do recived msg 8 times.
do recived msg 9 times.
do recived msg 10 times.
do recived msg 11 times.
do recived msg 12 times.
do recived msg 13 times.
do recived msg 14 times.
do recived msg 15 times.
do recived msg 16 times.
do recived msg 17 times.
do recived msg 18 times.
do recived msg 19 times.
do recived msg 20 times.
do recived msg 21 times.
do recived msg 22 times.
do recived msg 23 times.
do recived msg 24 times.
do recived msg 25 times.
do recived msg 26 times.
do recived msg 27 times.
do recived msg 28 times.
do recived msg 29 times.
do recived msg 30 times.
do recived msg 31 times.
do recived msg 32 times.
do recived msg 33 times.
do recived msg 1 times.
do recived msg 2 times.
do recived msg 3 times.
do recived msg 4 times.
do recived msg 5 times.
do recived msg 6 times.
do recived msg 7 times.
do recived msg 8 times.

Both counters were expected to grow without bound, but for some unknown reason the receiver's counter was periodically reset and started counting from 1 again. The real code is far more complex than this test program, so the problem resisted analysis for quite a while. After instrumenting and modifying the statistics function in the receiver in various ways, it finally looked as if the msgcount variable was being overwritten, periodically reset to 0. That suggested an out-of-bounds write. The struct _testmsg definition looked suspicious, so I pulled up the descriptions of msgsnd and msgrcv:

int msgsnd(int msqid, const void *msgp, size_t msgsz, int msgflg);

ssize_t msgrcv(int msqid, void *msgp, size_t msgsz, long msgtyp, int msgflg);

struct msgbuf {
    long mtype;    /* message type, must be > 0 */
    char mtext[1]; /* message data */
};

Comparing this with the program revealed that the length argument (msgsz) was wrong in both calls. msgsz must be the length of the variable-size mtext part only; it must not include the leading long mtype. With MSGS_LEN, both sides overran their buffers: the sender sent sizeof(long) extra bytes (4 on the 32-bit system used here) of meaningless content, and the receiver, when the message was copied out, had those extra bytes written past the end of rcvbuf. But why did they land exactly on the local static variable msgcount?
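A minimal sketch of the corrected size, under the definitions from msg.h above. The struct here is given a full-size body instead of the original 1-byte flexible-member trick, so the payload length can be taken directly from the type; the names with a `_fixed`/`_RIGHT` suffix are introduced only for this illustration:

```c
#include <stddef.h>

#define BODY_LEN 1024

typedef struct _testmsg_fixed
{
    long type;
    char msgbody[BODY_LEN];   /* full body instead of the 1-byte trick */
} testmsg_fixed;

/* Wrong: counts the leading long too, so both msgsnd and msgrcv
 * touch sizeof(long) bytes beyond the intended payload. */
#define MSGS_LEN_WRONG (sizeof(long) + sizeof(char) * BODY_LEN)

/* Right: msgsz covers only the msgbody (mtext) part. */
#define MSGS_LEN_RIGHT sizeof(((testmsg_fixed *)0)->msgbody)

/* The corrected calls would then be:
 *   msgsnd(msqid, &msg, MSGS_LEN_RIGHT, 0);
 *   msgrcv(msqid, &msg, MSGS_LEN_RIGHT, MSG_TYPE, IPC_NOWAIT);
 */
```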

With the Linux process memory layout diagram in hand, the problem becomes clear:

[Figure: Linux process memory layout]

As the diagram shows, the purpose of each segment is clear. The relevant fact for this problem is that initialized static variables live in the data segment. Although msgcount is a local static, "local" only confines its name to the function's scope; like the global static rcvbuf, it is placed in the data segment, and here the two happened to be laid out contiguously: msgcount occupied the 4 bytes immediately after rcvbuf. So every msgrcv that copied a message into rcvbuf also overwrote msgcount. The extra bytes the sender leaked happened to come from the zero-initialized region after its buffer, so msgcount was periodically overwritten with 0 and the count restarted from 1. A small change to the code easily confirms this.

Had I been thoroughly familiar with the memory layout of a Linux process, the analysis would likely have been much quicker.
