Oh No, My XPSP3

February 2nd, 2009
#include <windows.h>
#include <string.h>

int main()
{
 // Set the desktop wallpaper to a long garbage "path".
 WCHAR c[1000] = {0};
 memset(c, 'c', 1000);
 SystemParametersInfo(SPI_SETDESKWALLPAPER, 0, (PVOID)c, 0);

 // Read the wallpaper path back.
 WCHAR b[1000] = {0};
 SystemParametersInfo(SPI_GETDESKWALLPAPER, 1000, (PVOID)b, 0);
 return 0;
}

Two posts ago I talked about vulnerabilities, so here's some zero day. This will crash your system, unless you're on Vista (which is already immune to it). And why the heck, three service packs into XP, is this thing still not fixed?

It might be exploitable; I didn't research it any further than the BSOD caused by the security cookie check… Maybe in builds compiled without /GS it can be easily exploited. Or maybe overwriting enough of the stack to trigger an exception before the cookie check could do it.

“Remember to let her into your heart,
Then you can start to make it better” – The Beatles.

Escape

February 1st, 2009

Wanted to share this with the world:

e 0:0 cc
e 100 c4 c4 54 27

Can’t Stand it When…

January 31st, 2009

1) … when people say they write code in Assembler. Now, if that sentence didn't bother you, then you probably shouldn't read any further. It's as if I told someone that I know how to code in Compiler. And that's wrong: you don't code in a compiler, you use a compiler to compile the code you wrote in whatever language you actually write in. So the proper word is "Assembly". I encounter too many people, who even know some Assembly themselves, who say it incorrectly, and it freaks me out. The next thing I reply is "you write in compiler, ohhh wow, very nice", but they don't get it.

2) … when you think you're cool because you don't use gotos (since most people think it's a bad habit), and yet you do it indirectly and feel even cooler. I will just show a code snippet and say no more than this: your code should be readable, not make you look like a cool haxor guy (well, maybe that too), and using goto for cleaning up resources is legitimate !!!! (A goto version of the same thing follows the snippet.)

status = success;

do {
    p = (char*)malloc(1000);
    if (p == NULL) {
        status = fail;
        break; // <— oh yeah biatch.
    }
} while (FALSE); // <— oh no, so lame.

if (status != success) {
    if (p) free(p);
    if (bla) free(bla);
    return status;
}

status = do_more_stuff(…);
return status;
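For contrast, here is a minimal sketch of the goto-based cleanup I'm defending. It mirrors the snippet above, so do_more_stuff() and the second resource are placeholders, not real code:

#include <stdlib.h>

int do_more_stuff(char *p); // placeholder, like in the snippet above

int do_work(void)
{
    int status = 0;          // success
    char *p = malloc(1000);
    char *bla = malloc(50);  // the second resource from the snippet above

    if (p == NULL || bla == NULL) {
        status = -1;         // fail
        goto cleanup;        // the do/while(FALSE) + break above is exactly this, undisguised
    }

    status = do_more_stuff(p);

cleanup:
    free(p);                 // free(NULL) is a no-op, so no need to test
    free(bla);
    return status;
}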

3) … when something goes wrong internally in some function and you don't bubble the return code up to the caller, pretending it's "business as usual" when something is seriously broken. Then some guy like me has to come in and debug the control flow to find out what went wrong.

4) … when you cannot disassemble any address you want in the Visual Studio debugger (under Platform Builder), and instead you need to change the PC (IP on ARM) to whatever value, go to "Show Current Statement", only then set a breakpoint there to view the Assembly code, and afterwards restore the PC to its original value.

Got some more? Share them with us.

NULL, Vulnerabilities and Fuzzing

December 31st, 2008

I remember seeing Ilja at BH07. We talked about kernel attacks, aka privilege escalation. He also told me back then that he had found some holes that he managed to execute code through. I think the target platform was Windows, although Ilja specializes in Unix. Back in '05 he already gave a talk about Unix Kernel Auditing. Probably nothing new there, at least for the time being. However, the approach of fuzzing the kernel – the system calls, to be accurate – was pretty new. Feel free to correct me if I'm wrong about it. And it seems Ilja managed to find some holes using fuzzing. (BTW, a much more interesting paper from him is about Unusual Bugs.)

Personally, I don't believe in fuzzing. Usually the holes I find are ones no fuzzer would find. Although I do believe that you need to mix tools and knowledge in order to find holes and audit software in a better way. It is enough that there is a simple validation on some parameter you pass to a potentially vulnerable function, and your whole test gets thrown away because of that validation; even though there is still a weakness in that function, you won't get to it. Then you say "ah uh", and you think you can refine the randomness of the parameters you pass to that function and hopefully prevail. Well, it might work, it might not. As I said, I'm not a big fan of fuzzing.

It might be cool, though, to have a tool that analyzes the code of a function and builds the parameters in a special way so as to reach 100% code coverage of that function. That isn't fuzzing anymore; it means you walk all paths of execution, and the chances of finding a weakness are so much greater. Writing such a tool is craziness, and yet possible, if you ask me.

Fuzzing or not, there are still weaknesses in Win32k, which is supposed to be one of the most "secured"/audited components in the kernel, probably because many researchers have had a go at it as well. And that's simply sad.

Speaking about Ilja's fuzzing of the kernel, and about thinking we are cool for finding weaknesses nowadays: Mark Russinovich wrote NTCrash back in '96, for god's sake, and it was a fuzzer(!), but back then nobody called it that or knew about fuzzers. And NTCrash, as simple as it is, found some weaknesses in kernel system calls of NT4 ;) Respect (though today it won't even scratch the kernel, so we may still think ourselves cool for finding stuff :) ).

A friend and I are trying to audit another application, and my friend found a NULL dereference which crashes that software. So we fired up Olly and tried to see what's going on. It seems that some interface is queried and returns a success code, yet at the same time we get NULL for that interface, which means something is really f*cked up there. The thing is, as you can probably imagine, we want to execute code out of it. But the odds seem to be against us this time, since we can't control that NULL or anything around it.

I then wanted to see what people have done with NULL dereferences before, and how to exploit them better. Usually, 99% of the applications out there don't have page 0 mapped into their address space. CSRSS and NTVDM, for instance, do have it mapped, but who cares now…? It doesn't help our cause, and besides, you probably can't control that page 0 and its data anyway. So I encountered that Flash exploitation. To be honest, I didn't read the whole white paper about the exploitation, I only looked at how the arbitrary data write worked. It seems that some CALLOC had failed to allocate memory because of an integer overflow weakness, and from there you got a NULL pointer to begin with. But Flash didn't access that pointer immediately – it had some pointer arithmetic added to it first. And you guessed it right: you can control some offset before the pointer is really accessed, thus you can write (almost) anywhere you want. Now, I really don't underestimate the exploitation; from the bits I read it is a crazy and very beautiful piece of work. But to say that it is a new technique and a new class of exploitation is something I really don't agree with. You know what, looking at it in a different light – it would probably not have led to code execution if that CALLOC had not returned NULL, because then you wouldn't know where you are on the heap and you couldn't really write anywhere accurately. And besides, the NULL wasn't dereferenced directly; an offset was added to it (no matter what the calculation was, for the sake of conversation), so I don't see it as that exciting, if you ask me (again, not the exploitation itself but the "new class of exploitation"). Still, you should check it out :)
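To make that pattern concrete, here is a hedged sketch of the shape of bug described above; all the names are made up for illustration:

#include <stdlib.h>

// 'count' and 'index' are attacker-influenced in this hypothetical scenario.
void store_item(unsigned int count, unsigned int index, int value)
{
    // The allocation may fail (or its size computation may overflow),
    // leaving 'base' as NULL.
    int *base = (int*)calloc(count, sizeof(int));

    // Missing NULL check: the access isn't to NULL itself but to
    // NULL + index * sizeof(int), so 'index' picks where the write lands.
    base[index] = value;
}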

So, since I saw that no one has done anything really useful with a true NULL dereference, it seems that the weakness he found is only a DoS; but maybe we can control something there – yet to be researched…

String Initialization is Tricksy

December 29th, 2008

A friend of mine had to hand in an assignment for a Computer Science course at university. As I understood it, it was a relatively easy assignment, and the point is that this friend is a very experienced programmer who knows a thing or two. Anyway, he had a line in the code which went like this:
char buf[1024] = "abc";

You don't even need to know C in order to understand that line, right? I assume we all agree on that. It simply initializes the buffer with a constant string literal. So his lecturer asked him what that line does precisely, and to his surprise his answer was incorrect. The correct answer is that the whole buffer is initialized, and then the string constant is copied into it (this can be done in a few ways, for example copying a buffer that already has the zeros at the end of it). So today another friend called me on the phone to ask about this thing – why our first friend was wrong about it. Now, as a reverser, I suppose I need to know the answer to such a simple matter as well. But the sad part is that I was wrong, just like the two of them. I fired up the C standard and started searching for the answer; I wanted living proof of the matter at hand. Looking here and there, it took me around 15 minutes to lay my hands on the sentence that settled the whole matter. And I quote:

“If there are fewer initializers in a brace-enclosed list than there are elements or members of an aggregate, or fewer characters in a string literal used to initialize an array of known size than there are elements in the array, the remainder of the aggregate shall be initialized implicitly the same as objects that have static storage duration.”

The last part of that quote is the answer – if there are fewer characters than the size of the array being initialized, the remainder has to be initialized as well. There is another clause which explains how that initialization is done, but for now, let's call it 'zeroing'.

Now, the reason I was wrong about it is that I happened to see many sequences like this (for example):
mov [buf+0], 'a'
mov [buf+1], 'b'
mov [buf+2], 'c'
mov [buf+3], 0

in lots of functions, and that means the source C code is:

char buf[] = "abc";

For this case the standard says that the size of the buffer is to be taken from the size of the literal constant string (don't forget the null termination character as well). So that's why I never saw a memset coming in to initialize the whole buffer. Besides, most people probably code it this way:
char buf[1024];
strcpy(buf, "abc");

which doesn't lead to a memset or any other initialization of the rest of the array.
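For the record, here is a small self-contained example that contrasts the two forms; the assert relies exactly on the clause quoted above:

#include <assert.h>
#include <string.h>

int main(void)
{
    char a[1024] = "abc";   // whole array initialized: "abc\0" plus 1020 zero bytes

    char b[1024];           // uninitialized storage...
    strcpy(b, "abc");       // ...only 4 bytes are written, the rest stays garbage

    assert(a[1000] == 0);   // guaranteed by the standard; b[1000] would be anyone's guess
    return 0;
}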

Instructions’ Prefixes Hell

December 21st, 2008

Since the first day diStorm was out, people haven't known how to deal with the fact that I drop (ignore) some prefixes. It seems that dropping unused prefixes isn't such a great feature for many people, and it only complicates the scanning of streams. Therefore I am thinking about removing the whole mechanism, or maybe changing it in a way that still preserves the same interface but behaves differently.

For the stream "67 50", the result from diStorm will be: "db 0x67" followed by "push eax". The 0x67 prefix is supposed to change the address size, but no address is used in this instruction, thus it's dropped. However, if we look at the hex bytes of the "push eax" part we will see "67 50". And this is where most people become dumbfounded: getting the same prefix byte of the stream in two results is confusing. Taking a look at other disassemblers will tell you that diStorm is not the only one to play such games with prefixes. Sometimes I get emails regarding this "impossible" prefix – since it gets output twice, which is wrong, right? Well, I don't know, it depends how you choose to decode it. The way I chose to decode prefixes was really advanced: each prefix could be ignored unless it actually affected (one of) the operands. I had to really keep track of each prefix and know whether it affected any operand in the instruction, and only then did I decide which prefixes to drop. This all sounds right, in a way. Hey, at least to me.

However, we haven't even talked about what you do when you have multiple prefixes of the same group (segment-override: DS, ES, SS, etc.). This one is really up to the interpretation of the designer. The way I did it in diStorm is probably wrong, I admit it; that's why I want to rewrite the whole prefixes mechanism from scratch. There are five groups of prefixes, and according to the specs (Intel/AMD), I quote: "A single instruction should include a maximum of one prefix from each of the five groups." …. "The result of using multiple prefixes from a single group is unpredictable." That pretty much sums up all the problems in the world related to prefixes. From these two lines alone you can see that you could treat them in many different ways. We know that it can lead to "unpredictable" results if you have many prefixes – in reality it won't shut down your CPU, it won't even throw an exception. So screw it, you say, and you're right. Now let's see some (16-bit) CPU-like logic for decoding the prefixes:

while (prefix byte is read) {
    switch (prefix) {
        case seg_cs: use_seg = cs; break;
        case seg_ds: use_seg = ds; break;
        case seg_ss: use_seg = ss; break;
        ...
        case op_size: op_size = 32; break;
        case op_addr: op_addr = 32; break;
        case rep_z: rep = z; break;
        ...
    }
    – skip byte in stream –
}

The processor will use those flags in order to know which prefixes were present. The thing about using a loop (in any form) is that when you have to produce text out of a stream with many prefixes, you don't know whether the processor really uses the first occurrence of a prefix or its last, or maybe both. And maybe Intel and AMD even implement it differently?

You know what? Why the heck do I bother so much with minor edge cases that never really happen in real code sections? I ask myself that too; maybe I shouldn't. Although I have happened to see for myself some malware code that tries to screw up the disassembler with many extra prefixes, etc., and I thought diStorm could help malware analysts with advanced prefix decoding as well.

Anyway, according to the above logic I'm supposed to use the last prefix of each group. Given a stream such as 66 66 67 67 40, I will get:
0: 66 (dropped)
2: 67 (dropped)
1: 66 67 40
Now you can see that the prefixes used are the second and the fourth, and that the instruction starts at the second byte of the stream. I can officially commit suicide now; even I can't follow these addresses, it's hell. So, any better solution?
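Just to make the "last prefix of each group wins" bookkeeping from the example above concrete, here is a minimal sketch – this is not diStorm's actual code, only an illustration of the rule:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint8_t stream[] = { 0x66, 0x66, 0x67, 0x67, 0x40 };
    int op_size_idx = -1;   // offset of the 0x66 that actually counts
    int addr_size_idx = -1; // offset of the 0x67 that actually counts
    int i;

    for (i = 0; i < (int)sizeof(stream); i++) {
        if (stream[i] == 0x66) op_size_idx = i;        // a later 0x66 overrides an earlier one
        else if (stream[i] == 0x67) addr_size_idx = i; // same for 0x67
        else break;                                    // first non-prefix byte = the opcode
    }

    // For 66 66 67 67 40 this prints: used 0x66 at 1, used 0x67 at 3, opcode at 4
    printf("used 0x66 at %d, used 0x67 at %d, opcode at %d\n",
           op_size_idx, addr_size_idx, i);
    return 0;
}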

Welcome Back

December 20th, 2008

Hey you guys again, I’m back from South East Asia after 3 months of traveling all around. Was awesome :)

So here's a potentially cool real story: while I was walking on the beach with a few friends in Vietnam (Nha Trang, to be accurate), a friend found a pouch with credit cards, a driving license, etc. The only things we knew about the pouch's owner were her name and that she was Irish. That didn't really help us reach her; unfortunately, no cellphone number was attached anywhere in the pouch. The next thing we thought of was to look her up on Facebook, but she wasn't listed (who doesn't have FB nowadays? :) ). So we had to hand it in to the local Vietnamese police station, but that poor girl probably continued traveling and never got it back…

Anyway, I just realized something very nice. Suppose you have somebody's email – whether someone left a comment with only his email on this blog, or whatever – and you wish to find out who he or she is. Usually we fire up Google, search for that email, and can learn a lot from that. But sometimes we can't find anything, and even when we do find something, it might not be relevant or enough information about that person. What I realized is that you can search for people by their email on Facebook, and I actually managed to identify a few people who were anonymous except for their emails, which is quite interesting… Finally we have some way to link a person to an email address; think about it.

So that's it, I'm back for a couple of months. Hopefully I will write some interesting posts; I need to get ideas, which usually originate from my work. Stay tuned ;)

Software That Uses diStorm

August 24th, 2008

Now that diStorm has been out for a few years, we can already see it used here and there. Most users are private users rather than commercial, but even commercial applications use diStorm. I guess many people also use it internally in their companies, but without their word I can't really know about it, except for some friends who tell me so.

It's pretty cool when you write something useful which people actually use, including commercial use. That was my main reason for releasing diStorm under the permissive BSD license. The problem arises when a commercial application doesn't give credit for your work; it's really frustrating, and I guess one can't do much about it. There is this Vietnamese BKV Pro anti-virus software that claims to be written by professors and students (or the like), so I really didn't expect such people to omit credit. But this is our world :( I got an email from an advocate about diStorm's copyright infringement. It seems they also abuse WinRAR's license, so I'm not the only one… To be honest, I would prefer they stop using diStorm immediately rather than not give me credit. There are other disassembler libraries out there; they could use those as well. On the other hand, I'm happy to know they use diStorm, and I only ask for recognition, nothing else, after all the hard work I put into it. I emailed them but got no response. This license violation by the AV guys seems to be making a lot of noise in Vietnamese blogs and forums, though I can't really understand any of it except where they quote diStorm's license or mention my name. I haven't yet contacted OSI, and I'm not sure they can really help, but it's worth a try.

Anyway, there are good people who do give credit, and I decided it's about time I showed a small list of users. The first mention, though, goes to a good friend I met through diStorm, who reported many bugs and helped in testing the 64-bit environment support (not to be confused with AMD64), Sanjay Patel. He works at (and is a founder of) RotateRight.com, which released their Zoom product last month – a very smart profiler, currently only for Linux though. The product has a free 30-day trial version; you should check it out, it seems very promising, because I know more of the guys behind this product, although I haven't tested it myself. But hey, it uses diStorm :)

More products which use diStorm:

Apple Shark Profiler

SolidShield – server side protector

DFSee – Low Level disk tools

And some open source projects:

Python-ptrace

Crypto Implementations Analysis Toolkit

Well, that's what I'm aware of, at least; I believe there are more though.

Have fun :)

Proxy Functions – The Right Way

August 21st, 2008

As much of an Assembly freak as I am, I try to avoid it whenever possible. It's the "pick the right language for your project" thing – don't use overqualified stuff. Actually, in the beginning, when I started my patch on the iPhone, I compiled a simple stub for my proxy, then fixed it manually and only then used that code for the patch. Just to be clear about the terminology here: a proxy function is a function that gets called instead of the original function; once control belongs to the proxy function, it may call the original function or not.

The way most people implement this proxy function technique is detour patching, which simply means that we patch the first instruction (or a few, depending on the architecture) and change it to branch into our code. Mind you, I'm messing with ARM here – iPhone… The most important difference is that the return address of a function is stored in a register rather than on the stack, which, if you're not used to it, will easily confuse you and get you some crashes.

So suppose my target function begins with something like:

SUB SP, SP, #4
STMFD SP!, {R4-R7,LR}
ADD R7, SP, #0xC

This prologue is the equivalent of the push ebp; mov ebp, esp thing on x86, plus storing a few registers so we can change their values without harming the caller, of course. And one last thing: we also store LR (the link register), which holds the caller's return address.

Anyhow, in my case I override (detour) the first instruction to branch into my code, wherever it is. Therefore, in order for my proxy function to continue execution in the original function, I have to somehow emulate that overridden instruction and only then continue from the next instruction, as if the original patched function hadn't been touched. There are rare cases where you cannot override specific instructions – then you just have to work harder and change the way your detour works (instructions that use the program counter as an operand, branches, etc.).

Since the caller's return address is stored in a register, we can't override the first instruction with a branch-link (the 'call' equivalent on x86), because we would lose the original caller's return address. Give it a thought for a second; it's confusing the first time, I know. An interesting point to note: if a function doesn't call other functions internally, it doesn't have to store LR on the stack and later pop PC (the program counter, the IP register) off the stack, because nobody touched that register – unless the function needs around 14 registers for optimizations instead of using local stack variables… This way you can tell which functions are leaves of the call graph, although it is not guaranteed.

Once we understand how the ARM architecture works, we can move on. I have to mention, though, that the first 4 parameters are passed in registers (R0 to R3) and the rest on the stack, so in the proxy we will have to treat the parameters accordingly. The good thing is that this ABI (Application Binary Interface) is something the compiler knows (LLVM with a GCC front-end, in my case), so you don't have to worry about it unless you write the proxy function manually yourself.

My proxy function can be written fully in C; it's possible to use C++ as well, but then you can't use all of its features…

int foo(int a, int b)
{
 if (a == 1000) b /= 2;
}

That's my sample foo proxy function; it doesn't do anything useful or interesting, but usually in proxies we want to change the arguments before moving on to the original function.

Once it is compiled, we can rip the code from the object or executable file (doesn't really matter which) and put it inside our patched file, but we are still missing the glue code. The glue code is a sequence of manually crafted instructions that lets your C code live within the rest of the binary file. And to be honest, this is what I really wanted to avoid in the first place. Of course, you say, "but you could write it once and then copy-paste that glue code and voila". In a way you're right, I could do that. But it's bothersome and takes too much time, even that simple copy-paste. And besides, it's enough to have one or more data objects stored after your function and you already have to relocate all the references to them. For instance, you might use a string in the proxy function. The way ARM works, everything gets compiled as PIC (Position Independent Code), for better or worse – probably better, in our case. But if you then want to put your glue code inside the function, before the string itself, you will have to change the offset from the current PC register to the string… Sometimes it's just easier to see some code:

stmfd sp!, {lr}
mov r0, #4          ; offset from the value PC reads below to the string
add r0, pc, r0      ; PC reads as this instruction's address + 8
bl _strlen
ldmfd sp!, {pc}
db "this function returns my length :)", 0

When you read the current PC, you get the current instruction's address + 8, because of the way the ARM pipeline works. That's why the offset to the string in the snippet above is 4: the string sits 12 bytes after the add, and the PC value already covers 8 of them. If you try to squeeze another instruction at the end of the function, for the sake of glue code, you will have to change the offset to 8. This really gets complicated if you have more than one resource to read. Even 32-bit values are stored after the end of the function, rather than in the operand of the instruction itself, as we know it on x86.

So to complete our proxy code in C, it will have to be:

int foo(int a, int b)
{
 // +4 = we skip the first instruction of the original, which now branches into this code!
 int (*orig_foo)(int, int) = (int (*)(int, int))<addr of orig_foo + 4>;

 if (a == 1000) b /= 2;

 // Emulate the real instruction we overrode, so the stack is balanced before we continue with the original function.
 asm("sub sp, sp, #4");
 return orig_foo(a, b);
}

This code looks more complete than before, but it contains a potential bug – can you spot it? OK, I'll give you a hint: if you were to use this code on x86 it would blow up, though on ARM it would work to some extent.

The bug lies in the number of arguments the original function receives. Since on ARM only arguments from the fifth onward are passed on the stack, our "sub sp, sp, #4" will make things go wrong for such functions. The stack of the original function should look as if we had never touched that function. This means that we want to push the arguments on the stack, ONLY then do the stack fix by 4, and afterwards branch to the second instruction of the original function. Sounds good, but this is not possible in C :( because it means we would have to run 'user-defined' code between the 'pushing arguments' phase and the 'calling the function' phase, which is not possible in any language I'm aware of. Correct me if I'm wrong, though. So my next sentence is going to be "except Assembly". Saved again ;)

Since I don't want to dirty my hands with editing the binary of my new proxy function after I compile it, we have to fix the problem I just described above. This is the way to do it, ladies and gentlemen:

int orig_foo(int a, int b); // the naked thunk defined below

int foo(int a, int b)
{
 if (a == 1000) b /= 2;
 return orig_foo(a, b);
}

int __attribute__((naked)) orig_foo(int a, int b)
{
 // Emulate the instruction we overrode, so the stack is balanced, then branch to the original function's second instruction.
 asm("sub sp, sp, #4\n"
     "ldr r12, [pc]\n"
     "bx r12\n"
     ".long <FOO ADDR + 4>");
}

The code simply fixes the stack, reads the absolute address of the original foo (again skipping its first instruction), and branches into that code. It won't change the return address in LR, so when the original function is done, it returns straight to the caller of orig_foo, which is our proxy function; that way we can still control the return value if we wish to.

We had to use the naked attribute (__declspec(naked) in VC) so that the compiler won't emit a prologue that would unbalance our stack again. The epilogue would never get to run anyway…

This technique will work on x86 in the same way, though for branching to an absolute address one should use: push <addr>; ret.
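For completeness, here is a hedged sketch of what that thunk could look like on 32-bit x86 with VC. FOO_ADDR is a hypothetical placeholder for the original function's address, and the +5 assumes a 5-byte JMP rel32 detour was written at its entry:

__declspec(naked) int orig_foo(int a, int b)
{
    __asm {
        ; re-execute here whatever original instruction(s) the detour JMP overwrote
        push FOO_ADDR + 5   ; absolute address of the original, past the patched bytes
        ret                 ; push <addr>; ret == branch to an absolute address
    }
}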

Bottom line, I don't mind paying the price of a few lines of Assembly; that's perfectly OK with me. The problem was having to edit the binary after compilation in order to make it ready to be put into the original binary as a patch. Besides, the Assembly code is a must if you wish to compile it without further ado, and as long as the first instruction of the original function hasn't changed, your code is good to go.

This code works well, just as I wanted, so I thought I'd share it with you guys as a better "infrastructure" for making proxy function patches.

However, it would have been perfect if the compiler stored the functions in the same order you write them in the source code; then the first instruction of the block would be the first instruction you have to run. As it is, you might need to add another branch at the beginning of the code so it skips the non-entry code. This is really compiler dependent: GCC seems to be the best at preserving function order, while VC and LLVM are more problematic when optimizations are enabled. I believe I will cover this topic in the future.

One last thing: if you use -O3, or function inlining kicks in, the orig_foo naked function becomes part of the foo function, and then the assumption that the original function returns into our foo proxy no longer holds. So just be sure to peek at the generated code and verify everything is fine ;)
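If you want to guard against that, one hedged option (assuming GCC/LLVM here) is to mark the thunk as non-inlinable in addition to naked, so that foo() keeps a real call to it and the original function still returns into foo():

// Forbid inlining so orig_foo stays a separate function.
int __attribute__((naked, noinline)) orig_foo(int a, int b)
{
    asm("sub sp, sp, #4\n"
        "ldr r12, [pc]\n"
        "bx r12\n"
        ".long <FOO ADDR + 4>");
}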

x86 Instruction Set Wars

August 14th, 2008

It all started back in the 90's when Intel came up with the dashing MMX technology. AMD wasn't late to respond with its new 3DNow! instruction set, which, if you ask me, was much less popular and less used. But the good thing about 3DNow! was that it handled floating point rather than integers. Then came the SSE instruction set, and the world got better; nowadays compilers even use it, up to SSE2, knowing it will be there on 99% of the machines the code runs on today. However, they still perform an SSE test to be sure they can use it, and I believe this check will stay in the code for assurance anyway – can't harm, you know.
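Such a runtime check is tiny; here is a minimal sketch, using a GCC builtin – other compilers would query CPUID directly or use their own intrinsics:

#include <stdio.h>

int main(void)
{
    // __builtin_cpu_supports() asks the CPU (via CPUID) whether a feature is present.
    if (__builtin_cpu_supports("sse2"))
        puts("taking the SSE2 code path");
    else
        puts("falling back to the plain x86 path");
    return 0;
}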

Now, almost 10 years later, we see another split in technologies. Intel came up with SSE4 – I already talked about it in an earlier post – which contained really valuable instructions. And then, guess what? AMD added several instructions on top of Intel's.

In August '07 AMD announced a new set: SSE5. Aren't we sick of SSE already??? Anyway, in April this year, Intel announced a new AVX instruction set (Advanced Vector Extensions) in response.

This isn't going anywhere. Each company announces a new instruction set in its turn; the one doesn't support the other's and vice versa. This is just going wrong, and it's our nightmare. Basically, most developers shouldn't care anyway – they don't use these sets, and the sets are (partly) not even out officially (meaning you can't use them for now). Therefore, the game matters for compiler developers and those who write tools that mess with machine code, etc. Probably many codecs and crypto algorithms will also use them directly, if they exist on the processor they get to run on…

The thing is, I decided to take a side and stick only to AVX, and therefore I won't support SSE5 in diStorm. If anyone from the community wants to help with that, I will gladly accept it, don't get me wrong. But since I don't have much time for it anyway, and I hate this mess, I will stick to Intel. I don't know if that's good or bad news for diStorm, but as its only owner, and the way I see it, I've got to do something against this lack of standardization. Everything has standards today; why can't the damned x86 have one as well?

I’m not the only one who speaks about this, Agner Fog has also talked about it here. The difference is that I can make my own small change.

What do you think should happen with this issue??? Who's right, Intel or AMD? Should developers now code twice, once for each instruction set?

arrrrgh so many questions