Alexander Blade

[V] Script/Native Documentation and Research


unknown modder

Does anyone have script dumps from b505 or earlier?

If you have the script_rel.rpf from there, I can do one.

Kryptus

 

> Does anyone have script dumps from b505 or earlier?
>
> If you have the script_rel.rpf from there, I can do one.

 

So could I :p

Just can't think of a way to get prior versions of the game.

mockba.the.borg

I guess you could google for 2505-GTA_V_Patch_1_0_505_2.zip ... you should be able to find it.

Eddlm

> I do backup the reference every month. This one is from 10 days ago: http://www.mediafire.com/file/04awazwtpv8mt3i/reference.html

In the name of humanity, THANKS.

 

 

 

> This documentation tool uses a single Markdown file for its source. Formatting requires some Markdown knowledge.
>
> The idea would be to make changes to the common source repo file, do a pull request, and then one of a team of people can approve. A hook would be set up to automatically update when master is updated. If the site goes down, the repo is unaffected.
>
> A couple of questions:
>
>   • Is NativeDB too unstable to continue to work with?
>   • Is using git too high a barrier for making casual contributions to this?
>   • I stubbed out a few natives, but am only basically familiar with using these. Can anyone give feedback on what's translated so far? I believe I can write a script to import one of the most recent archives into the Markdown file.
>
> I confess, I don't know the exact way to format these descriptions, or how big of a need there is here.
>
> So I am interested in feedback.

 

Thank you too, for this project.

 

> Is Native DB too unstable to continue to work with?

Looks like it. It's very good at its job, but bots are always screwing with it. It's the main source of my native-handling knowledge, so I need it badly.

> Is using git too high a barrier for making casual contributions to this?

No, in my opinion. I doubt any developer would have a problem using git. However, I'll admit NativeDB's native documenting system was really comfortable to use.

> I stubbed out a few natives, but am only basically familiar with using these. Can anyone give feedback on what's translated so far? I believe I can write a script to import one of the most recent archives into the Markdown file.

I got Unknown Modder's native reference file from NativeDB; I'll see if I can, in the coming days, check your website and try to document more stuff. Although a script

> I confess, I don't know the exact way to format these descriptions, or how big of a need there is here.

Believe me, THERE IS A NEED. Over the last few days I've been forced to dig through my older projects for references on how to use some natives I needed. It's a nightmare.
Edited by Eddlm

jfoster

In reference to my previous post, a testing version of the new native documentation site is here: http://138.68.41.182/

 

> No, in my opinion. I doubt any developer would have a problem using git. However, I'll admit NativeDB's native documenting system was really comfortable to use.

Is it comfortable to use because of the way you can expand / collapse the categories? Can you give specific feedback on what makes it comfortable? I could make changes to allow a collapsing behavior if that is what is meaningful.

 

> I got Unknown Modder's native reference file from NativeDB; I'll see if I can, in the coming days, check your website and try to document more stuff.

The current version doesn't let you make changes; it has to be recompiled from a Markdown file to work. But I would still appreciate feedback on the format of the ones I stubbed out there. I need to know it makes sense before I invest more time in this.

 

> Believe me, THERE IS A NEED. Over the last few days I've been forced to dig through my older projects for references on how to use some natives I needed. It's a nightmare.

Okay. I think a trial of this group-managed git project, where several people can approve pull requests that cause the docs to auto-regenerate, might be a good step.

 

To move forward on this I think we need:

1. Clarity on how to format GTA native documentation.

2. A script to parse an existing reference into the agreed-on format.

3. A GitHub project containing the source Markdown file, with users from this thread able to approve pull requests.

4. A hook set up to regen the docs on http://138.68.41.182/ or some new domain name any time a PR is approved (rough sketch below).
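
For point 4, the hook itself can be tiny. A sketch, assuming the cpp-httplib library, with placeholder paths, port, and build command (webhook signature verification omitted):

// Sketch of the regen hook: listen for a repository webhook and rebuild
// the docs from the Markdown source. Repo path, build script, endpoint
// and port are all placeholders.
#include <cstdlib>
#include <httplib.h>

int main() {
    httplib::Server server;
    server.Post("/regen", [](const httplib::Request&, httplib::Response& res) {
        std::system("git -C /srv/native-docs pull && /srv/native-docs/build.sh");
        res.set_content("regenerated\n", "text/plain");
    });
    server.listen("0.0.0.0", 8080);
}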

Edited by jfoster

Eddlm

 

> I would still appreciate feedback on the format of the ones I stubbed out there.
>
> I need to know it makes sense before I invest more time in this.

 

 

I like the current design, but I would make it so the formatted template (currently at the right side) is located in the center too, under the native name and description.

 

So it looks like:

 

 

GET_PLAYER_PED
This native allows you to select a given player's character.
Ped GET_PLAYER_PED(Player player) // 43A66C31C68491C0 6E31E993

 

 

Basically the current design, moving the dark part under each native.

Edited by Eddlm

jfoster

Okay, I will make the edits and do a regen for feedback. I want to make sure I get it right before working on a script for the mass export.

> Basically the current design, moving the dark part under each native.

 

 

 

 

I believe the third pane in these kinds of documentation projects is for code examples. Are you aware of people posting code usage examples for natives? Would you add code examples if you were adding to or updating native docs?

Edited by jfoster

O-Deka-K

I've been using the NativeDB and other resources (like this site) for experimenting with modding. I agree that there is a need for this documentation project. The NativeDB is a very useful resource, but the lack of security makes it frustrating to use. Unfortunately, Fireboyd78 said that Alexander Blade isn't really concerned about implementing security.

 

 

> Is it comfortable to use because of the way you can expand / collapse the categories? Can you give specific feedback on what makes it comfortable? I could make changes to allow a collapsing behavior if that is what is meaningful.

What I find nice about NativeDB's implementation (even though I've never actually edited it) is that the wiki-like style makes it easy to edit an individual function. Also, the expand/collapse mechanism allows you to see all of the function names at a glance. At the same time, you can choose to expand just the functions that you're working with, so you can easily refer to them while you're working.

 

 

> To move forward on this I think we need:
>
> 1. Clarity on how to format GTA native documentation.
> 2. A script to parse an existing reference into the agreed-on format.
> 3. A GitHub project containing the source Markdown file, with users from this thread able to approve pull requests.
> 4. A hook set up to regen the docs on http://138.68.41.182/ or some new domain name any time a PR is approved.

So if I get this straight, the new procedure would be:

  • Clone the Markdown file on GitHub
  • Edit the file
  • Commit the changes
  • Submit a pull request
  • An authorized team member approves the changes
  • The website gets regenerated from the Markdown file

IMO, I find this method kind of bulky. Not so much the GitHub procedure itself, but more because people would be editing one large flat file: the Markdown file is essentially the entire database in a single text file. I'd suggest at least breaking it down by namespace, which would give 42 files. It might also make it easier to manage pull requests if multiple people are submitting changes.

 

Have you thought about making it a wiki? There are many available implementations. You wouldn't have to worry about regenerating the website from the source (since that's built in), and users would be able to edit directly in their browsers. As well, there would be a running history for every page, so it would be easy to see who changed something, and easy to revert changes. Even if something gets messed up, anyone can go back to look at a previous version of the page. You could probably set it up such that users need to be approved in order to be able to edit, or maybe even that each edit needs to be approved (but that's generally not necessary). You'd then be more concerned with managing users, in that you'd be trying to prevent edit wars and kicking out any vandals that somehow got authorized (or maybe hacked an account).

Edited by O-Deka-K

jfoster

> IMO, I find this method kind of bulky. Not so much the GitHub procedure itself, but more because people would be editing a large flat file. The Markdown file is essentially the entire database in a large text file.

 

Excellent point. Editing a massive text file is cumbersome and prone to error. Breaking the file up into sections is more doable; the build tool would need to concatenate them, but that's trivial. I'm not sure forcing contributors to learn Markdown reduces the barrier to entry enough. I am wondering if it needs a familiar (i.e. Wikipedia-style), if not WYSIWYG, editing interface.

 

> Have you thought about making it a wiki?

 

It does seem that the simple editing and authentication offered by a wiki would make the most sense. The trick would be getting it to display the code documentation in a decent style. That said, NativeDB's barebones style seems to have been more than enough, so why shoot higher here? Solve the griefing problem instead.

 

There is still an issue of mass importing a reference into a new format or wiki. It looks like there are a lot of methods to do this with MediaWiki.

unknown modder

> There is still an issue of mass importing a reference into a new format or wiki. It looks like there are a lot of methods to do this with MediaWiki.

You'd need something for JSON, which is what NativeDB uses.
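
Something like this could do the conversion (a rough sketch, assuming the nlohmann/json library and a namespace → hash → fields layout for the dump; the field names here are assumptions, not a confirmed schema):

// Rough sketch: turn a NativeDB-style JSON dump into Markdown sections.
// Assumed layout:
//   { "PLAYER": { "0x43A66C31C68491C0": { "name": "GET_PLAYER_PED",
//                                         "comment": "..." }, ... }, ... }
#include <fstream>
#include <iostream>
#include <nlohmann/json.hpp>

int main() {
    std::ifstream in("natives.json");
    const nlohmann::json db = nlohmann::json::parse(in);

    for (const auto& [ns, natives] : db.items()) {
        std::cout << "## " << ns << "\n\n";
        for (const auto& [hash, native] : natives.items()) {
            // Fall back to the raw hash if a native has no name yet.
            std::cout << "### " << native.value("name", hash) << " // " << hash << "\n";
            std::cout << native.value("comment", "") << "\n\n";
        }
    }
}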

Transmet

I made a CUDA bruteforcer for natives (it only works with joaat hashes).

Source: https://github.com/Transmet92/Large-Hash-CUDA-Collider

Output examples: https://github.com/Transmet92/Large-Hash-CUDA-Collider/tree/master/Examples%20outputs

Bench on GTX 760: ~14.8 million hashes per second
Bench on GTX 1080: ~33 million hashes per second

Yesterday I found 4 natives:

SET_FAKE_WANTED_LEVEL

SET_RENDER_HD_ONLY

ADD_REPLAY_STAT_VALUE

GET_TIME_AS_STRING

The unknown native hashes are not up to date, but you can edit the unknowns, the dictionary of start words, and the other-words array in "dict.h".
Now the longest part is sorting through the false positives.

It is surely still very unstable, but it's experimental.
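
For reference, this is the function all of these tools are attacking — a minimal sketch of the Jenkins one-at-a-time (joaat) hash, assuming native names are hashed verbatim:

// Minimal sketch of the Jenkins one-at-a-time (joaat) hash.
#include <cstdint>
#include <cstdio>

uint32_t joaat(const char* key) {
    uint32_t hash = 0;
    while (*key) {
        // Mix in one character at a time.
        hash += static_cast<unsigned char>(*key++);
        hash += hash << 10;
        hash ^= hash >> 6;
    }
    // Final avalanche.
    hash += hash << 3;
    hash ^= hash >> 11;
    hash += hash << 15;
    return hash;
}

int main() {
    std::printf("0x%08X\n", joaat("SET_FAKE_WANTED_LEVEL"));
}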
Edited by Transmet

LeFix

I still prefer git; the built-in functions provide a safe and easy way to develop the documentation.

It might be a bit too much for efficiently searching natives, but those false positives rob more time.

unknown modder

 

> I made a CUDA bruteforcer for natives (it only works with joaat hashes).
>
> Bench on GTX 760: ~3.5 million hashes per second
> Bench on GTX 1080: ~11 million hashes per second

 

You really need to optimise the hell out of that. My CPU-bound brute forcer can do 35 megahashes per second on an i5 2500K.

 

Time taken: 57.08s
Total tries = 1,891,142,967
Total found = 7

The output was garbage because I was just using random sh*t in the dictionary.

EDIT: That was running at stock clock speeds, too.

EDIT: Those cards are easily capable of over 1000 MH/s for a simple linear hash function like joaat.

Edited by unknown modder

sfinktah

 

> Bench on GTX 760: ~3.5 million hashes per second
> Bench on GTX 1080: ~11 million hashes per second

> You really need to optimise the hell out of that. My CPU-bound brute forcer can do 35 megahashes per second on an i5 2500K.
>
> EDIT: Those cards are easily capable of over 1000 MH/s for a simple linear hash function like joaat.

 

I think we need to define the length of the hash, otherwise these results are meaningless.

unknown modder
> I think we need to define the length of the hash, otherwise these results are meaningless.

 

minmaxheightrangeoffsetpositionmultiplierdragtestefsdfsdfsghdfsdafefdfaferwfsdafasdsdfsadfsdfsdfdasdasdasdasaweaeweqwsadasdcweeefsfsfseretsertsfsfsersers

All permutations of those, up to a max word count of 7.
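
In sketch form, that kind of search is just recursive enumeration over the word list, rehashing each candidate in full (placeholder words and target; the real lists live in "dict.h", and underscore-joined uppercase words are an assumption based on the known native names):

// Naive sketch of a dictionary permutation search against target joaat
// hashes: every candidate is rehashed from scratch.
#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_set>
#include <vector>

uint32_t joaat(const std::string& key) {
    uint32_t h = 0;
    for (unsigned char c : key) { h += c; h += h << 10; h ^= h >> 6; }
    h += h << 3;
    h ^= h >> 11;
    return h + (h << 15);
}

void search(const std::vector<std::string>& words,
            const std::unordered_set<uint32_t>& targets,
            const std::string& candidate, int depth) {
    if (depth > 0 && targets.count(joaat(candidate)))
        std::cout << candidate << '\n';
    if (depth == 7) return; // max word count of 7
    for (const auto& w : words)
        search(words, targets, depth == 0 ? w : candidate + "_" + w, depth + 1);
}

int main() {
    const std::vector<std::string> words = {"SET", "FAKE", "WANTED", "LEVEL", "PED"};
    const std::unordered_set<uint32_t> targets = {joaat("SET_FAKE_WANTED_LEVEL")};
    search(words, targets, "", 0);
}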

mockba.the.borg

I thought you cannot really verify the validity of a native's name by hashing it.

Especially because the hashes of the natives change more or less on every new game release.

Or am I missing some point here?

 

I am able to get all entry points from the game's natives table and track the registration of natives as they happen when the game loads, but I don't see any way to guarantee that a native's name is valid.

 

Any ideas on that would be awesome!

DatBrick

> I thought you cannot really verify the validity of a native's name by hashing it.
>
> Especially because the hashes of the natives change more or less on every new game release.
>
> Or am I missing some point here?
>
> I am able to get all entry points from the game's natives table and track the registration of natives as they happen when the game loads, but I don't see any way to guarantee that a native's name is valid.
>
> Any ideas on that would be awesome!

The console version of the game used joaat hashes of the actual native names, instead of the randomized hashes that the PC version uses. So while we don't know the real hashes of the natives added after the PC release, we know most of the hashes for the old natives. Then it's just a matter of finding a name that matches the hash, and fits with what the native actually does.

Transmet

 

 

> You really need to optimise the hell out of that. My CPU-bound brute forcer can do 35 megahashes per second on an i5 2500K.
>
> EDIT: Those cards are easily capable of over 1000 MH/s for a simple linear hash function like joaat.

Yes, I also get 38 MH/s on my i5 (single-threaded), but with sequential generation.

But I use a random distribution for quicker and more interesting results, which slows things down dramatically (even on a GPU).

I did a few optimizations, and now I get 14.8 MH/s on my GTX 760 and 33 MH/s on the GTX 1080.
Without the randomization I get several hundred MH/s, but the results are less relevant.
We are looking for collisions with real meaning, not just any collisions to bypass a security system...
It's true that I could have optimized much more, but it still remains far more powerful than CPUs.
Anyway, excuse us for the off-topic.
Maybe if we focus a bruteforcer on a single namespace, removing words that do not make sense in that namespace, we would get more positive results.
Edited by Transmet

unknown modder

 

> Yes, I also get 38 MH/s on my i5 (single-threaded), but with sequential generation.
>
> But I use a random distribution for quicker and more interesting results, which slows things down dramatically (even on a GPU).

The main point I was getting at is that you aren't taking advantage of the fact that the hash's internal state only needs to be calculated once for a given prefix. For example, you could calculate the hashes for

A_REALLY_LONG_STRING_THAT_WOULD_TAKE_MANY_CYCLES_A

and

A_REALLY_LONG_STRING_THAT_WOULD_TAKE_MANY_CYCLES_B

by calculating the state up to the last letter, then reusing it for both hashes.

#include <cstdint>

inline uint32_t joaat_state_only(const char* key, uint32_t previous_state)
{
    while (*key)
    {
        // Mix in one character at a time (the per-character part of joaat).
        previous_state += *key++;
        previous_state += previous_state << 10;
        previous_state ^= previous_state >> 6;
    }
    return previous_state;
}

inline uint32_t joaat_finish(uint32_t current_state)
{
    // The final avalanche, applied once per complete candidate.
    current_state += current_state << 3;
    current_state ^= current_state >> 11;
    return current_state + (current_state << 15);
}

auto state = joaat_state_only("A_REALLY_LONG_STRING_THAT_WOULD_TAKE_MANY_CYCLES_", 0);
auto hashA = joaat_finish(joaat_state_only("A", state));
auto hashB = joaat_finish(joaat_state_only("B", state));

These could then be chained without any needless recalculation.
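
Applied to the dictionary search sketched a few posts up, that means caching one running state per recursion level, so each added word only hashes its own characters (again with placeholder data):

// Sketch: reuse the cached joaat state per recursion level instead of
// rehashing the whole candidate string at every step.
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

inline uint32_t joaat_state_only(const char* key, uint32_t s) {
    while (*key) { s += *key++; s += s << 10; s ^= s >> 6; }
    return s;
}

inline uint32_t joaat_finish(uint32_t s) {
    s += s << 3;
    s ^= s >> 11;
    return s + (s << 15);
}

void search(const std::vector<std::string>& words, uint32_t target,
            uint32_t state, const std::string& name, int depth) {
    if (depth > 0 && joaat_finish(state) == target)
        std::cout << name << '\n';
    if (depth == 7) return; // max word count of 7
    for (const auto& w : words) {
        const std::string part = depth == 0 ? w : "_" + w;
        // Extend the cached state by the new part only.
        search(words, target, joaat_state_only(part.c_str(), state),
               name + part, depth + 1);
    }
}

int main() {
    const std::vector<std::string> words = {"SET", "FAKE", "WANTED", "LEVEL"};
    const uint32_t target = joaat_finish(joaat_state_only("SET_FAKE_WANTED_LEVEL", 0));
    search(words, target, 0, "", 0);
}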

Edited by unknown modder

Transmet
That's right, and I thought about it, but the main thing for me was to get a lot of collisions, just enough to fill the lists.

I did not take the time to optimize; moreover, your idea would not work with the random generation I chose.

I tried several algorithm designs, and the one with the more relevant collisions is the one I finally chose.

The essential thing is not to get as many collisions as possible, but to get more interesting collisions.

Also, the biggest current weak point in my code, even more important than your optimization, is the recurrent use of GPU global memory, which slows things down enormously, especially when checking hashes. :lol:


Edited by Transmet

Alexander Blade

DB backup from April 25 is restored

Meth0d

Dude, I always wonder how you make this kind of stuff...

Is it related to the game's hex code? You're really great. I want to learn this kind of reverse engineering, but I have no clue how to start!

 

Keep the awesome...

unknown modder

> Dude, I always wonder how you make this kind of stuff...
>
> Is it related to the game's hex code? You're really great. I want to learn this kind of reverse engineering, but I have no clue how to start!
>
> Keep the awesome...

It involves disassembling the game's unpacked executable, but it's not something that can be learnt overnight.

pumaaa

The decompiled scripts link is down.

jedijosh920

_0x6CD5A433374D4CFB changed to _CAN_PED_SEE_PED

 

Takes two parameters, ped 1 and ped 2, and returns true/false depending on whether ped 1 can see ped 2 in its line of sight.
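
A usage sketch, assuming the ScriptHookV SDK's raw invoke template and its standard header layout (the wrapper name is made up here):

// Sketch: calling the renamed native through ScriptHookV's raw invoke.
#include "inc/nativeCaller.h"
#include "inc/types.h"

BOOL CanPedSeePed(Ped ped1, Ped ped2)
{
    // _CAN_PED_SEE_PED: true if ped1 has ped2 in its line of sight.
    return invoke<BOOL>(0x6CD5A433374D4CFB, ped1, ped2);
}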

Ceiridge

Alexander Blade, please be quicker

ItsiAdam

> Alexander Blade, please be quicker

You're so inconsiderate!

