
[V] Script/Native Documentation and Research



I back up the reference every month. This one is from 10 days ago: http://www.mediafire.com/file/04awazwtpv8mt3i/reference.html

In the name of humanity, THANKS.

 

 

 

This documentation tool uses a single Markdown file for its source. Formatting requires some Markdown knowledge.

 

The idea would be to make changes to the common source file in the repo, open a pull request, and have one of a team of people approve it. A hook would be set up to automatically update the site when master is updated. If the site goes down, the repo is unaffected.

 

A couple of questions:

 

  • Is Native DB too unstable to continue to work with?
  • Is using git too high a barrier for making casual contributions to this?
  • I stubbed out a few natives, but am only basically familiar with using these. Can anyone give feedback on what's translated so far? I believe I can write a script to import one of the most recent archives into the markdown file.

 

I confess, I don't know the exact way to format these descriptions. Or how big of a need there is here.

 

So I am interested in feedback.

 

Thank you too, for this project.

 

>Is Native DB too unstable to continue to work with?

Looks like it. It's very good at its job, but bots are always screwing with it. It's the main source of my native-handling knowledge, so I need it badly.

 

>Is using git too high a barrier for making casual contributions to this?

No, in my opinion. I doubt any developer would have a problem using git. However, I'll admit NativeDB's native-documenting system was really comfortable to use.

 

>I stubbed out a few natives, but am only basically familiar with using these. Can anyone give feedback on what's translated so far? I believe I can write a script to import one of the most recent archives into the markdown file.

I got Unknown Modder's native reference file from NativeDB; I'll see if I can check out your website in the coming days and try to document more stuff. Although a script...

 

>I confess, I don't know the exact way to format these descriptions. Or how big of a need there is here.
Believe me, THERE IS A NEED. Over the last few days I've been forced to dig through my older projects for references on how to use some natives I needed. It's a nightmare.
Edited by Eddlm

In reference to my previous post, the testing version of the new native documentation site is here: http://138.68.41.182/

 

> No, in my opinion. I doubt any developer would have a problem using git. However, I'll admit NativeDB's native-documenting system was really comfortable to use.

Is it comfortable to use because of the way you can expand/collapse the categories? Can you give specific feedback on what makes it comfortable? I could add collapsing behavior if that's what matters.

 

> I got Unknown Modder's native reference file from NativeDB; I'll see if I can check out your website in the coming days and try to document more stuff. Although a script...

The current version doesn't let you make changes; it has to be recompiled from a Markdown file to work. But I would still appreciate feedback on the format of the ones I stubbed out there. I need to know it makes sense before I invest more time in this.

 

> Believe me, THERE IS A NEED. Over the last few days I've been forced to dig through my older projects for references on how to use some natives I needed. It's a nightmare.

Okay. I think a trial of this group-managed git project, where several people can approve pull requests that automatically regenerate the docs, might be a good step.

 

To move forward on this I think we need:

1. Clarity on how to format GTA native documentation.

2. A script to parse an existing reference into the agreed-upon format.

3. A GitHub project containing the source markdown file, with users from this thread able to approve pull requests.

4. A hook set up to regenerate the docs on http://138.68.41.182/ (or some new domain name) any time a PR is approved.

Edited by jfoster

 

> I would still appreciate feedback on the format of the ones I stubbed out there. I need to know it makes sense before I invest more time in this.

 

 

I like the current design, but I would make it so the formatted template (currently at the right side) is located in the center too, under the native name and description.

 

So it looks like:

 

 

GET_PLAYER_PED
This native allows you to select a given player's character.
Ped GET_PLAYER_PED(Player player) // 43A66C31C68491C0 6E31E993

 

 

Basically the current design, moving the dark part under each native.

Edited by Eddlm

Okay, I will make the edits and do a regen for feedback. I want to make sure I get it right before working on a script for the mass export.

> Basically the current design, moving the dark part under each native.

 

 

 

 

I believe the third pane in these kinds of documentation projects is for code examples. Are you aware of people posting code usage examples of natives? Would you add code examples if you were adding to or updating native docs?

Edited by jfoster

I've been using the NativeDB and other resources (like this site) for experimenting with modding. I agree that there is a need for this documentation project. The NativeDB is a very useful resource, but the lack of security makes it frustrating to use. Unfortunately, Fireboyd78 said that Alexander Blade isn't really concerned about implementing security.

 

 

> Is it comfortable to use because of the way you can expand/collapse the categories? Can you give specific feedback on what makes it comfortable? I could add collapsing behavior if that's what matters.

What I find nice about NativeDB's implementation (even though I've never actually edited it) is that the wiki-like style makes it easy to edit an individual function. Also, the expand/collapse mechanism allows you to see all of the function names at a glance. At the same time, you can choose to expand just the functions that you're working with so you can easily refer to them while you're working.

 

 

> To move forward on this I think we need:
> 1. Clarity on how to format GTA native documentation.
> 2. A script to parse an existing reference into the agreed-upon format.
> 3. A GitHub project containing the source markdown file, with users from this thread able to approve pull requests.
> 4. A hook set up to regenerate the docs on http://138.68.41.182/ (or some new domain name) any time a PR is approved.

So if I get this straight, the new procedure would be:

  • Clone the Markdown file on GitHub
  • Edit the file
  • Commit the changes
  • Submit a pull request
  • An authorized team member approves the changes
  • The website gets regenerated from the Markdown file

IMO, I find this method kind of bulky. Not so much the GitHub procedure itself, but more because people would be editing a large flat file. The Markdown file is essentially the entire database in a large text file. I'd suggest at least breaking it down by namespace. That would break it into 42 files. It might also make it easier to manage pull requests if multiple people are submitting changes.

 

Have you thought about making it a wiki? There are many available implementations. You wouldn't have to worry about regenerating the website from the source (since it's built in), and users would be able to edit directly in their browsers. As well, there would be a running history for every page, so it would be easy to see who changed something, and easy to revert changes. Even if something gets messed up, anyone can go back to look at a previous version of the page. You could probably set it up such that users need to be approved in order to be able to edit, or maybe even that each edit needs to be approved (but that's generally not necessary). You'd then be more concerned with managing users, in that you'd be trying to prevent edit wars and kicking out any vandals that somehow got authorized (or maybe hacked an account).

Edited by O-Deka-K

> IMO, I find this method kind of bulky. Not so much the GitHub procedure itself, but more because people would be editing a large flat file. The Markdown file is essentially the entire database in a large text file.

 

Excellent point. Editing a massive text file is cumbersome and prone to error. Breaking the file up into sections is more doable; the build tool would need to concatenate them, but that's trivial (see the sketch below). I'm not sure forcing contributors to learn Markdown reduces the barrier to entry enough. I'm wondering if it needs a familiar, Wikipedia-style, if not WYSIWYG, editing interface.
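
For what it's worth, a minimal sketch of what that concatenation step could look like. The natives/ directory layout and file names here are hypothetical, and this assumes C++17 for std::filesystem:

#include <algorithm>
#include <filesystem>
#include <fstream>
#include <iostream>
#include <vector>

namespace fs = std::filesystem;

int main() {
    // Hypothetical layout: one markdown file per namespace, e.g. natives/PLAYER.md
    std::vector<fs::path> parts;
    for (const auto& entry : fs::directory_iterator("natives"))
        if (entry.path().extension() == ".md")
            parts.push_back(entry.path());

    // Sort so rebuilds are deterministic regardless of directory order
    std::sort(parts.begin(), parts.end());

    std::ofstream out("reference.md", std::ios::binary);
    for (const auto& p : parts) {
        std::ifstream in(p, std::ios::binary);
        out << in.rdbuf() << '\n'; // blank line between namespace sections
    }
    std::cout << "Merged " << parts.size() << " namespace files into reference.md\n";
}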

 

> Have you thought about making it a wiki?

 

It does seem that the simple editing and authentication offered by a wiki would make the most sense. The trick would be getting it to display the code documentation in a decent style. That said, NativeDB's barebones style seems to have been more than enough, so why shoot higher here? Solve the griefing problem instead.

 

There is still an issue of mass importing a reference into a new format or wiki. It looks like there are a lot of methods to do this with MediaWiki.

unknown modder

> There is still an issue of mass importing a reference into a new format or wiki. It looks like there are a lot of methods to do this with MediaWiki.

You'd need something for JSON, which is what NativeDB uses.
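
For illustration, a rough sketch of a JSON-to-Markdown importer. It assumes the nlohmann/json single-header library and a NativeDB-style layout of namespace → hash → fields; the exact field names ("name", "comment") are assumptions about the schema:

#include <fstream>
#include <nlohmann/json.hpp> // assumed JSON library

// Convert a NativeDB-style JSON dump into a single markdown source file.
// Assumed schema: { "NAMESPACE": { "0xHASH": { "name": ..., "comment": ... } } }
int main() {
    std::ifstream in("natives.json");
    nlohmann::json db = nlohmann::json::parse(in);

    std::ofstream out("reference.md");
    for (const auto& [ns, natives] : db.items()) {
        out << "# " << ns << "\n\n";
        for (const auto& [hash, native] : natives.items()) {
            out << "## " << native.value("name", hash) << "\n";
            out << "Hash: " << hash << "\n\n";
            out << native.value("comment", "") << "\n\n";
        }
    }
}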

I made a CUDA bruteforcer for natives (it only works with joaat hashes).

 

Source : https://github.com/Transmet92/Large-Hash-CUDA-Collider

Output examples : https://github.com/Transmet92/Large-Hash-CUDA-Collider/tree/master/Examples%20outputs

 

Bench on GTX 760: ~14.8 million hashes per second
Bench on GTX 1080: ~33 million hashes per second

 

Yesterday, I found 4 natives:

SET_FAKE_WANTED_LEVEL

SET_RENDER_HD_ONLY

ADD_REPLAY_STAT_VALUE

GET_TIME_AS_STRING

The unknown native hashes are not up to date, but you can edit the unknowns, the dictionary of starting words, and the other-words array in "dict.h".
Now the longest part is sorting out the false positives.

 

It's surely still very unstable, but it's experimental.
Edited by Transmet
unknown modder

 

> I made a CUDA bruteforcer for natives (it only works with joaat hashes). [...] Bench on GTX 760: ~3.5 million hashes per second; GTX 1080: ~11 million hashes per second. [...] It's surely still very unstable, but it's experimental.

 

You really need to optimise the hell out of that. My CPU-bound brute forcer can do 35 megahashes per second on an i5 2500K.

 

Time taken: 57.08 s. Total tries: 1,891,142,967. Total found: 7.

The output was garbage because I was just using random sh*t in the dictionary.

EDIT: that was running at stock clock speeds too.

EDIT: those cards are easily capable of over 1000 MH/s for a simple linear hash function like joaat.

Edited by unknown modder

 

> I made a CUDA bruteforcer for natives (it only works with joaat hashes). [...] Bench on GTX 760: ~3.5 million hashes per second; GTX 1080: ~11 million hashes per second. [...]

> You really need to optimise the hell out of that. My CPU-bound brute forcer can do 35 megahashes per second on an i5 2500K. [...] those cards are easily capable of over 1000 MH/s for a simple linear hash function like joaat.

 

I think we need to define the length of the hash, otherwise these results are meaningless.

unknown modder
> I think we need to define the length of the hash, otherwise these results are meaningless.

 

minmaxheightrangeoffsetpositionmultiplierdragtestefsdfsdfsghdfsdafefdfaferwfsdafasdsdfsadfsdfsdfdasdasdasdasaweaeweqwsadasdcweeefsfsfseretsertsfsfsersers

All permutations of those, up to a max word count of 7.
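
For anyone curious, here is a minimal single-threaded sketch of that style of dictionary search, in its naive form (rehashing every candidate from scratch). The word list is purely illustrative, candidates are assumed to be joined with underscores, and the target is the second (console joaat) hash shown in the GET_PLAYER_PED entry quoted earlier in the thread:

#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// Jenkins one-at-a-time (joaat), the hash used for the console native names.
uint32_t joaat(const std::string& s) {
    uint32_t h = 0;
    for (unsigned char c : s) {
        h += c;
        h += h << 10;
        h ^= h >> 6;
    }
    h += h << 3;
    h ^= h >> 11;
    return h + (h << 15);
}

// Naive enumeration: build every word sequence up to maxWords and
// hash each full candidate from scratch.
void search(const std::vector<std::string>& dict, const std::string& prefix,
            int wordsLeft, uint32_t target) {
    if (wordsLeft == 0) return;
    for (const auto& w : dict) {
        std::string cand = prefix.empty() ? w : prefix + "_" + w;
        if (joaat(cand) == target)
            std::cout << cand << '\n';
        search(dict, cand, wordsLeft - 1, target);
    }
}

int main() {
    // Illustrative word list; a real run would use a much larger dictionary.
    std::vector<std::string> dict = {"GET", "SET", "PLAYER", "PED", "TIME"};
    search(dict, "", 7, 0x6E31E993); // jhash from the GET_PLAYER_PED entry above
}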

mockba.the.borg

I thought you cannot really verify the validity of a native's name by hashing it.

Especially because the hashes of the natives change more or less on every new game release.

Or am I missing some point here?

 

I am able to get all entry points from the game's natives table, I am able to track the registration of natives as they happen when the game loads, but I don't see any way to guarantee that a native's name is valid.

 

Any ideas on that would be awesome!

> I thought you cannot really verify the validity of a native's name by hashing it. Especially because the hashes of the natives change more or less on every new game release. [...] I don't see any way to guarantee that a native's name is valid.

The console version of the game used joaat hashes of the actual native names, instead of the randomized hashes that the PC version uses. So while we don't know the real hashes of the natives added after the PC release, we know most of the hashes for the old natives. Then it's just a matter of finding a name that matches the hash, and fits with what the native actually does.
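
In code terms, reusing a joaat implementation like the sketch earlier in the thread and the GET_PLAYER_PED jhash quoted above, the check boils down to:

// A candidate name is plausible only if its joaat hash matches the
// console (jhash) value recorded for that native:
bool plausible = joaat("GET_PLAYER_PED") == 0x6E31E993;
// Matching is necessary but not sufficient: joaat collides easily,
// so the name also has to fit what the native actually does.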


 

 

> You really need to optimise the hell out of that. My CPU-bound brute forcer can do 35 megahashes per second on an i5 2500K. [...] those cards are easily capable of over 1000 MH/s for a simple linear hash function like joaat.

 

 

Yes, I also get 38 MH/s with my i5 (single-threaded), but with sequential generation (generating names in order).

But I use a random distribution for quicker and more interesting results, which slows things down dramatically (even on a GPU).

I did a few optimizations, and now I get 14.8 MH/s on my GTX 760 and 33 MH/s on my GTX 1080.
Without the randomization, I get several hundred MH/s, but the results are less relevant.
We're looking for collisions with real meaning, not just any collisions to bypass a security check...
It's true that I could optimize much more, but it's still much more powerful than a CPU.
Anyway, excuse us for going off topic.
Maybe if we focused the bruteforcer on a single namespace, removing words that don't make sense in that namespace, we'd get more positive results.
Edited by Transmet
unknown modder

 

> I did a few optimizations, and now I get 14.8 MH/s on my GTX 760 and 33 MH/s on my GTX 1080. [...] It's true that I could optimize much more, but it's still much more powerful than a CPU. [...]

The main point I was getting at is that you aren't taking advantage of the fact that the hash's internal state only needs to be calculated once for a given substring. For example, you could calculate the hashes for

A_REALLY_LONG_STRING_THAT_WOULD_TAKE_MANY_CYCLES_A

and

A_REALLY_LONG_STRING_THAT_WOULD_TAKE_MANY_CYCLES_B

by calculating the state up to the last letter, then reusing it for both hashes.

#include <cstdint>

// Accumulate the joaat state over a string, starting from a previous state.
inline uint32_t joaat_state_only(const char* key, uint32_t previous_state)
{
    while (*key)
    {
        previous_state += *key++;
        previous_state += previous_state << 10;
        previous_state ^= previous_state >> 6;
    }
    return previous_state;
}

// Apply the joaat finalization steps to a partial state.
inline uint32_t joaat_finish(uint32_t current_state)
{
    current_state += current_state << 3;
    current_state ^= current_state >> 11;
    return current_state + (current_state << 15);
}

auto state = joaat_state_only("A_REALLY_LONG_STRING_THAT_WOULD_TAKE_MANY_CYCLES_", 0);
auto hashA = joaat_finish(joaat_state_only("A", state));
auto hashB = joaat_finish(joaat_state_only("B", state));

These states could then be chained without any needless recalculation.
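
A sketch of that chaining applied to a whole dictionary search, reusing joaat_state_only and joaat_finish from the snippet above (needs the usual <cstdint>, <iostream>, <string>, <vector> includes; the word list, underscore joining, and target are illustrative):

// Carries the partial joaat state down the recursion, so each shared
// prefix is hashed exactly once instead of once per candidate.
void search(const std::vector<std::string>& dict, uint32_t state,
            const std::string& name, int wordsLeft, uint32_t target) {
    if (wordsLeft == 0) return;
    for (const auto& w : dict) {
        uint32_t s = state;
        std::string n = name;
        if (!n.empty()) {
            s = joaat_state_only("_", s);
            n += '_';
        }
        s = joaat_state_only(w.c_str(), s);
        n += w;
        if (joaat_finish(s) == target)
            std::cout << n << '\n';
        search(dict, s, n, wordsLeft - 1, target);
    }
}

// Usage: search(dict, 0, "", 7, targetHash);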

Edited by unknown modder
That's right, and I thought about it, but the main thing for me was to get a lot of collisions, just enough to fill the lists.

I didn't take the time to optimize; moreover, your idea wouldn't work with the random generation I chose.

I tried several algorithm designs, but the one with the most relevant collisions is the one I finally chose.

The essential thing is not to get as many collisions as possible, but to get more interesting ones.

What's more, the biggest current weak point in my code, even more important than your optimization, is the recurring use of GPU global memory, which slows things down enormously, especially when checking hashes. :lol:


Edited by Transmet
TheMuggles

_0x92F0DA1E27DB96DC

Renamed to _SET_NOTIFICATION_BACKGROUND_COLOR

Parameters: [p1] - int colour

 

Changes the background colour of a map notification, using colour indexes:

https://gyazo.com/68bd384455fceb0a85a8729e48216e15
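
For anyone wanting to try it from a script, a hedged usage sketch. This assumes the ScriptHookV SDK's raw native-call functions (nativeInit / nativePush64 / nativeCall) and that the native takes the single int colour index described above; the wrapper name is made up:

#include <cstdint>
#include "main.h" // ScriptHookV SDK header declaring nativeInit/nativePush64/nativeCall

// Hypothetical wrapper for the native documented above. Typically called
// before drawing the notification so it picks up the background colour.
void SetNotificationBackgroundColor(int colourIndex) {
    nativeInit(0x92F0DA1E27DB96DC); // _SET_NOTIFICATION_BACKGROUND_COLOR
    nativePush64(static_cast<std::uint64_t>(colourIndex));
    nativeCall();
}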

unknown modder

> Dude, I always wonder how you make this kind of stuff... Is it related to the game's hex code? You're really great; I want to learn this kind of reverse engineering, but I have no clue how to start! Keep up the awesome work...

It involves disassembling the game's unpacked executable, but it's not something that can be learned overnight.
