
[V] Script/Native Documentation and Research

1,141 replies to this topic
unknown modder
  • unknown modder

    Bon Jon Bovi

  • Members
  • Joined: 04 Jul 2012
  • United-Kingdom

#1051

Posted 13 May 2017 - 10:39 AM

Does anyone have script dumps from b505 or earlier?

If you have the script_rel.rpf from there, I can do one.


Kryptus
  • Kryptus

    Developer

  • Members
  • Joined: 04 Jun 2015
  • United-States

#1052

Posted 16 May 2017 - 03:49 AM

 

> Does anyone have script dumps from b505 or earlier? If you have the script_rel.rpf from there, I can do one.

So could I :p

Just can't think of a way to get prior versions of the game.


mockba.the.borg
  • mockba.the.borg

    Punk-ass Bitch

  • Members
  • Joined: 17 Jan 2016
  • United-States

#1053

Posted 16 May 2017 - 06:19 PM

I guess you could google for 2505-GTA_V_Patch_1_0_505_2.zip ... you should be able to find it.


Eddlm
  • Eddlm

    Newie

  • Members
  • Joined: 24 Aug 2015
  • Spain

#1054

Posted 17 May 2017 - 08:19 PM Edited by Eddlm, 17 May 2017 - 08:20 PM.

> I do back up the reference every month. This one is from 10 days ago: http://www.mediafire.../reference.html

In the name of humanity, THANKS.

 

 

 

> This documentation tool uses a single Markdown file for its source. Formatting requires some Markdown knowledge.
>
> The idea would be to make changes to the common source repo file, do a pull request, and then one of a team of people can approve. A hook would be set up to automatically update when master is updated. If the site goes down, the repo is unaffected.
>
> A couple of questions:
>
> • Is Native DB too unstable to continue to work with?
> • Is using git too high a barrier for making casual contributions to this?
> • I stubbed out a few natives, but am only basically familiar with using these. Can anyone give feedback on what's translated so far? I believe I can write a script to import one of the most recent archives into the markdown file.
>
> I confess, I don't know the exact way to format these descriptions. Or how big of a need there is here.
>
> So I am interested in feedback.

 

Thank you too, for this project.

 

>Is Native DB too unstable to continue to work with?

Looks like it. It's very good at its job, but bots are always screwing with it. It's the main source of my native-handling knowledge, so I need it badly.

 

>Is using git too high a barrier for making casual contributions to this?

No, in my opinion. I doubt any developer would have a problem using git. However, I'll admit NativeDB's native-documenting system was really comfortable to use.

 

>I stubbed out a few natives, but am only basically familiar with using these. Can anyone give feedback on what's translated so far? I believe I can write a script to import one of the most recent archives into the markdown file.

I got Unknown Modder's native reference file from NativeDB; I'll see if I can, in the coming days, check your website and try to document more stuff. Although a script

 

>I confess, I don't know the exact way to format these descriptions. Or how big of a need there is here.

Believe me, THERE IS A NEED. Over the last few days I've been forced to look back at my older projects for references on how to use some natives I needed. It's a nightmare.

jfoster
  • jfoster

    Player Hater

  • Members
  • Joined: 26 Apr 2017
  • United-States

#1055

Posted 17 May 2017 - 09:21 PM Edited by jfoster, 17 May 2017 - 09:21 PM.

In reference to my previous post, Testing version of new native documentation site is here: http://138.68.41.182/

 

>No, in my opinion. I doubt any developer would have a problem using git. However I'd admit NativeDB's native documenting sistem was really comfortable to use.

Is it comfortable to use because of the way you can expand/collapse the categories? Can you give specific feedback on what makes it comfortable? I could add collapsing behaviour if that's what matters.

 

>I got Unknown Modder's native reference file from NativeDB, I'll see if I can, in the coming days, check your website and try to document more stuff. Although a script

The current version doesn't let you make changes, it has to be recompiled from a markdown file to work.  But I would still appreciate feedback on the format of the ones I stubbed out there.  I need to know it makes sense before I invest more time in this.

 

>Believe me. THERE IS A NEED. Over the last days I've been forced to look up on my older projects to get references on how to use some natives I needed. Its a nightmare.

Okay. I think a test of this group-managed git project where several-to-many people can approve pull requests that cause the docs to auto-regen (update) might be a good step.  

 

To move forward on this I think we need:

1. Clarity on how to format GTA native documentation.

2. A script to parse an existing reference into the agreed-on format.

3. A GitHub project containing the source markdown file, with users from this thread able to approve pull requests.

4. A hook set up to regen the docs on http://138.68.41.182/ (or some new domain name) any time a PR is approved.


Eddlm
  • Eddlm

    Newie

  • Members
  • Joined: 24 Aug 2015
  • Spain

#1056

Posted 18 May 2017 - 02:05 AM Edited by Eddlm, 18 May 2017 - 02:06 AM.

 

> I would still appreciate feedback on the format of the ones I stubbed out there. I need to know it makes sense before I invest more time in this.

 

 

I like the current design, but I would make it so the formatted template (currently on the right side) is located in the center too, under the native name and description.

 

So it looks like:

 

 

GET_PLAYER_PED

This native allows you to select a given player's character.

Ped GET_PLAYER_PED(Player player) // 43A66C31C68491C0 6E31E993

 

 

Basically the current design, moving the dark part under each native.


jfoster
  • jfoster

    Player Hater

  • Members
  • Joined: 26 Apr 2017
  • United-States

#1057

Posted 18 May 2017 - 04:49 PM Edited by jfoster, 18 May 2017 - 04:50 PM.

> Basically the current design, moving the dark part under each native.

Okay, I will make the edits and do a regen for feedback. I want to make sure I get it right before working on a script for the mass export.

I believe the third pane in these kinds of documentation projects is for code examples. Are you aware of people posting code usage examples for natives? Would you add code examples if you were adding to or updating native docs?


O-Deka-K
  • O-Deka-K

    Moose Loose Aboot the Hoose

  • Members
  • Joined: 05 May 2017
  • Canada

#1058

Posted 18 May 2017 - 07:16 PM Edited by O-Deka-K, 18 May 2017 - 07:18 PM.

I've been using the NativeDB and other resources (like this site) for experimenting with modding. I agree that there is a need for this documentation project. The NativeDB is a very useful resource, but the lack of security makes it frustrating to use. Unfortunately, Fireboyd78 said that Alexander Blade isn't really concerned about implementing security.
 

> Is it comfortable to use because of the way you can expand / collapse the categories? Can you give specific feedback on what makes it comfortable? I could make changes to allow a collapsing behavior if that is what is meaningful.

What I find nice about NativeDB's implementation (even though I've never actually edited it) is that the wiki-like style makes it easy to edit an individual function. Also, the expand/collapse mechanism allows you to see all of the function names at a glance. At the same time, you can choose to expand just the functions that you're working with so you can easily refer to them while you're working.
 

> To move forward on this I think we need:
> 1. Clarity on how to format GTA native documentation.
> 2. A script to parse an existing reference into the agreed-on format.
> 3. A GitHub project containing the source markdown file, with users from this thread able to approve pull requests.
> 4. A hook set up to regen the docs on http://138.68.41.182/ (or some new domain name) any time a PR is approved.


So if I get this straight, the new procedure would be:
  • Clone the Markdown file on GitHub
  • Edit the file
  • Commit the changes
  • Submit a pull request
  • An authorized team member approves the changes
  • The website gets regenerated from the Markdown file
IMO, I find this method kind of bulky. Not so much the GitHub procedure itself, but more because people would be editing a large flat file. The Markdown file is essentially the entire database in a large text file. I'd suggest at least breaking it down by namespace. That would break it into 42 files. It might also make it easier to manage pull requests if multiple people are submitting changes.
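Splitting by namespace keeps the build step trivial. A minimal sketch of the concatenation the generator would need, assuming per-namespace files like natives/PLAYER.md (the directory and file names are hypothetical, not what jfoster's tool actually uses):

```cpp
#include <algorithm>
#include <filesystem>
#include <fstream>
#include <vector>

// Concatenate every .md file in `dir` into a single `out` file, sorting by
// name first because directory iteration order is unspecified. A newline is
// appended after each part so namespaces stay separated.
inline void concatenate(const std::filesystem::path& dir,
                        const std::filesystem::path& out) {
    std::vector<std::filesystem::path> parts;
    for (const auto& entry : std::filesystem::directory_iterator(dir))
        if (entry.path().extension() == ".md")
            parts.push_back(entry.path());
    std::sort(parts.begin(), parts.end()); // deterministic build output
    std::ofstream combined(out, std::ios::binary);
    for (const auto& p : parts) {
        std::ifstream in(p, std::ios::binary);
        combined << in.rdbuf() << '\n';
    }
}
```

The sort makes regenerated output reproducible, so pull-request diffs against the combined file stay meaningful.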

Have you thought about making it a wiki? There are many available implementations. You wouldn't have to worry about regenerating the website from the source (since it's built in), and users would be able to edit directly in their browsers. As well, there would be a running history for every page, so it would be easy to see who changed something, and easy to revert changes. Even if something gets messed up, anyone can go back to look at a previous version of the page. You could probably set it up such that users need to be approved in order to be able to edit, or maybe even that each edit needs to be approved (but that's generally not necessary). You'd then be more concerned with managing users, in that you'd be trying to prevent edit wars and kicking out any vandals that somehow got authorized (or maybe hacked an account).

jfoster
  • jfoster

    Player Hater

  • Members
  • Joined: 26 Apr 2017
  • United-States

#1059

Posted 18 May 2017 - 09:25 PM

> IMO, I find this method kind of bulky. Not so much the GitHub procedure itself, but more because people would be editing a large flat file. The Markdown file is essentially the entire database in a large text file.

 

Excellent point. Editing a massive text file is cumbersome and prone to error. Breaking the file up into sections is more doable; the build tool would need to concatenate them, but that's trivial. I'm not sure making contributors learn Markdown lowers the barrier to entry enough. I wonder if it needs a familiar (i.e. Wikipedia-style), if not WYSIWYG, editing interface.

 

> Have you thought about making it a wiki?

 

It does seem that the simple editing and authentication offered by a wiki would make the most sense. The trick would be getting it to display the code documentation in a decent style. That said, NativeDB's barebones style seems to have been more than enough, so why shoot higher here? Solve the griefing problem instead.

 

There is still an issue of mass importing a reference into a new format or wiki.  It looks like there are a lot of methods to do this with MediaWiki.

  • O-Deka-K likes this

unknown modder
  • unknown modder

    Bon Jon Bovi

  • Members
  • Joined: 04 Jul 2012
  • United-Kingdom

#1060

Posted 18 May 2017 - 10:59 PM

> There is still an issue of mass importing a reference into a new format or wiki. It looks like there are a lot of methods to do this with MediaWiki.

You'd need something for JSON, which is what NativeDB uses.


Transmet
  • Transmet

    LS:MP Leader & Developper

  • Members
  • Joined: 01 Aug 2014
  • France

#1061

Posted 22 May 2017 - 04:20 PM Edited by Transmet, 24 May 2017 - 12:03 PM.

I made a CUDA brute-forcer for natives (it only works with joaat hashes).

 

Source : https://github.com/T...h-CUDA-Collider

Output examples : https://github.com/T...xamples outputs

 

Bench on GTX 760: ~14.8 million hashes per second
Bench on GTX 1080: ~33 million hashes per second

 

Yesterday, I found 4 natives:

SET_FAKE_WANTED_LEVEL

SET_RENDER_HD_ONLY

ADD_REPLAY_STAT_VALUE

GET_TIME_AS_STRING

 
The unknown native hashes are not updated, but you can edit the unknowns, the dictionary of start words, and the other-words array in "dict.h".
The longest part now is sorting out the false positives.

 

It is surely still quite unstable, but it's experimental.

LeFix
  • LeFix

    Burning Tube Man

  • Members
  • Joined: 13 Jul 2015
  • Germany

#1062

Posted 22 May 2017 - 05:59 PM

I still prefer git; the built-in functions provide a safe and easy way to develop the documentation.

It might be a bit too much for efficiently searching natives, but those false positives rob more time.


unknown modder
  • unknown modder

    Bon Jon Bovi

  • Members
  • Joined: 04 Jul 2012
  • United-Kingdom

#1063

Posted 22 May 2017 - 06:13 PM Edited by unknown modder, 22 May 2017 - 06:31 PM.

 

> I made a CUDA brute-forcer for natives (only works with joaat hashes).
>
> Bench on GTX 760: ~3.5 million hashes per second
> Bench on GTX 1080: ~11 million hashes per second

 

You really need to optimise the hell out of that. My CPU-bound brute-forcer can do 35 megahashes per second on an i5 2500K:
 

Time taken: 57.08s
Total Tries = 1,891,142,967
Total found = 7

The output was garbage because I was just using random sh*t in the dictionary.
EDIT: that was running at stock clock speeds too.
EDIT 2: those cards are easily capable of over 1000 MH/s for a simple linear hash function like joaat.


sfinktah
  • sfinktah

    Player Hater

  • Members
  • Joined: 03 Jul 2016
  • Australia

#1064

Posted 22 May 2017 - 07:00 PM

> Bench on GTX 760: ~3.5 million hashes per second
> Bench on GTX 1080: ~11 million hashes per second
>
> You really need to optimise the hell out of that. My CPU-bound brute-forcer can do 35 megahashes per second on an i5 2500K.


I think we need to define the length of the hash, otherwise these results are meaningless.

unknown modder
  • unknown modder

    Bon Jon Bovi

  • Members
  • Joined: 04 Jul 2012
  • United-Kingdom

#1065

Posted 22 May 2017 - 07:11 PM

> I think we need to define the length of the hash, otherwise these results are meaningless.

 

min
max
height
range
offset
position
multiplier
drag
teste
fsdfsdfs
ghdfsdafe
fdfaferwfsdafasd
sdfsadfsdfsdf
dasdas
dasdasawe
aeweqw
sadasd
cweeefs
fsfsere
tsertsfs
fsersers

All permutations of those, up to a max word count of 7.
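The candidate space for such a dictionary grows as w + w² + … + w⁷ for w words, which is why throughput matters so much. A small sketch of that kind of expansion (my own illustration of '_'-joined dictionary permutation, not the actual generator either brute-forcer uses):

```cpp
#include <string>
#include <vector>

// Recursively emit every '_'-joined sequence (with repetition) of `words`
// up to `maxWords` words, e.g. GET, GET_GET, GET_POS, ...
void expand(const std::vector<std::string>& words, int maxWords,
            const std::string& prefix, std::vector<std::string>& out) {
    if (maxWords == 0) return;
    for (const auto& w : words) {
        std::string candidate = prefix.empty() ? w : prefix + "_" + w;
        out.push_back(candidate);                    // one finished candidate
        expand(words, maxWords - 1, candidate, out); // extend it further
    }
}
```

With the 20-odd words listed above and a max of 7, that is sum of 20^k for k = 1..7, roughly 1.35 billion candidates, before any hashing even starts.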


mockba.the.borg
  • mockba.the.borg

    Punk-ass Bitch

  • Members
  • Joined: 17 Jan 2016
  • United-States

#1066

Posted 23 May 2017 - 06:19 PM

I thought you cannot really verify the validity of a native's name by hashing it. 

Especially because the hashes of the natives change more or less on every new game release.

Or am I missing some point here?

 

I am able to get all entry points from the game's natives table, I am able to track the registration of natives as they happen when the game loads, but I don't see any way to guarantee that a native's name is valid.

 

Any ideas on that would be awesome!


DatBrick
  • DatBrick

    Brick

  • Members
  • Joined: 08 Nov 2015
  • United-Kingdom

#1067

Posted 24 May 2017 - 12:54 AM

> I am able to get all entry points from the game's natives table, and I am able to track the registration of natives as they happen when the game loads, but I don't see any way to guarantee that a native's name is valid.

The console version of the game used joaat hashes of the actual native names, instead of the randomized hashes that the PC version uses. So while we don't know the real hashes of the natives added after the PC release, we know most of the hashes for the old natives. Then it's just a matter of finding a name that matches the hash, and fits with what the native actually does.
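That check can be written directly: run Jenkins' one-at-a-time ("joaat") over the candidate name and compare against the console hash. A minimal sketch; whether a given DB's second hash column (e.g. the 6E31E993 shown for GET_PLAYER_PED earlier in the thread) is exactly the joaat of the name as written is that DB's convention, so treat the comparison target as an assumption:

```cpp
#include <cstdint>
#include <string>

// Bob Jenkins' one-at-a-time hash ("joaat"), as used for console native hashes.
uint32_t joaat(const std::string& key) {
    uint32_t h = 0;
    for (unsigned char c : key) { // mixing loop, one byte at a time
        h += c;
        h += h << 10;
        h ^= h >> 6;
    }
    h += h << 3;                  // finalisation
    h ^= h >> 11;
    h += h << 15;
    return h;
}

// A matching hash makes a candidate name plausible, nothing more.
bool matches(const std::string& candidate, uint32_t consoleHash) {
    return joaat(candidate) == consoleHash;
}
```

As DatBrick says, a hash match alone is not proof: joaat is only 32 bits, so brute-forcing produces collisions, and the name must also fit what the native is observed to do.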

  • mockba.the.borg likes this

Transmet
  • Transmet

    LS:MP Leader & Developper

  • Members
  • Joined: 01 Aug 2014
  • France

#1068

Posted 24 May 2017 - 12:03 PM Edited by Transmet, 24 May 2017 - 01:11 PM.

 

 

> You really need to optimise the hell out of that. My CPU-bound brute-forcer can do 35 megahashes per second on an i5 2500K. Those cards are easily capable of over 1000 MH/s for a simple linear hash function like joaat.

 

 

Yes, I also get 38 MH/s with my i5 (single-threaded), but that's with sequential generation of candidate names.

I use a random distribution instead, for quicker and more interesting results, and that slows things down dramatically (even on a GPU).

I did a few optimizations, and now I get 14.8 MH/s on my GTX 760 and 33 MH/s on the GTX 1080.
Without the randomization I get several hundred MH/s, but the results are less relevant.
We are looking for collisions with real meaning, not just any collisions to bypass a safety system...
It's true that I could have optimized much more, but it is still much more powerful than CPUs.

Anyway, excuse us for the off-topic.

Maybe if we focus a brute-forcer on a single namespace, removing words that do not make sense in that namespace, we would get more positive results.

unknown modder
  • unknown modder

    Bon Jon Bovi

  • Members
  • Joined: 04 Jul 2012
  • United-Kingdom

#1069

Posted 24 May 2017 - 05:12 PM Edited by unknown modder, 24 May 2017 - 05:17 PM.

 

> I did a few optimizations, and now I get 14.8 MH/s on my GTX 760 and 33 MH/s on the GTX 1080. Without the randomization I get several hundred MH/s, but the results are less relevant.

The main point I was getting at is that you aren't taking advantage of the fact that the hash's internal state only needs to be calculated once for a given prefix. For example, you could calculate the hashes for

A_REALLY_LONG_STRING_THAT_WOULD_TAKE_MANY_CYCLES_A

and

A_REALLY_LONG_STRING_THAT_WOULD_TAKE_MANY_CYCLES_B

by calculating the state up to the last letter, then reusing that state for both hashes.

// Mixing loop only -- the returned state can be saved and extended later.
inline uint32_t joaat_state_only(const char* key, uint32_t previous_state){
    while (*key){
        previous_state += *key++;
        previous_state += previous_state << 10;
        previous_state ^= previous_state >> 6;
    }
    return previous_state;
}

// Finalisation steps, applied once per complete candidate.
inline uint32_t joaat_finish(uint32_t current_state)
{
    current_state += current_state << 3;
    current_state ^= current_state >> 11;
    return current_state + (current_state << 15);
}

// Hash the shared prefix once, then finish each one-character suffix.
auto state = joaat_state_only("A_REALLY_LONG_STRING_THAT_WOULD_TAKE_MANY_CYCLES_", 0);
auto hashA = joaat_finish(joaat_state_only("A", state));
auto hashB = joaat_finish(joaat_state_only("B", state));

These could then be chained without any needless recalculation.

  • Transmet likes this

Transmet
  • Transmet

    LS:MP Leader & Developper

  • Members
  • Joined: 01 Aug 2014
  • France

#1070

Posted 24 May 2017 - 07:12 PM Edited by Transmet, 24 May 2017 - 07:25 PM.

That's right, and I thought about it, but the main thing for me was to get a lot of collisions, enough to fill the lists.
I didn't take the time to optimize; besides, your idea wouldn't work with the random generation I chose.
I tried several algorithm designs, and the one with the most relevant collisions is the one I picked in the end.

The essential thing is not to have as many collisions as possible, but to have more interesting collisions.

Also, the biggest current weak point in my code, even more important than your optimization, is the recurrent use of global GPU memory, which slows things down enormously, especially when checking hashes. :lol:

TheMuggles
  • TheMuggles

    Player Hater

  • Members
  • Joined: 18 Mar 2017
  • United-Kingdom

#1071

Posted 02 June 2017 - 01:03 PM

_0x92F0DA1E27DB96DC

Renamed to _SET_NOTIFICATION_BACKGROUND_COLOR

Parameters: [p1] - int colour

 

Changes the background colour of a map notification, using colour indexes:

https://gyazo.com/68...5a8729e48216e15

  • Kesha_F1, jedijosh920 and Kryptus like this

Alexander Blade
  • Alexander Blade

    Come As You Are

  • Members
  • Joined: 05 Nov 2006
  • None
  • Best Tool 2016 [OpenIV]
    Major Contribution Award [Mods]

#1072

Posted 05 June 2017 - 05:15 AM

DB backup from April 25 is restored
 

  • ikt and Mr.Arrow like this

Meth0d
  • Meth0d

    Player Hater

  • Members
  • Joined: 06 May 2016
  • Brazil

#1073

Posted 05 June 2017 - 09:37 AM

Dude, I always wonder how you make this kind of stuff...

Is it related to the game's hex code? You're really great. I want to learn this kind of reverse engineering, but I have no clue how to start!

Keep up the awesome...


unknown modder
  • unknown modder

    Bon Jon Bovi

  • Members
  • Joined: 04 Jul 2012
  • United-Kingdom

#1074

Posted 05 June 2017 - 01:58 PM

> I want to learn this kind of reverse engineering, but I have no clue how to start!

It involves disassembling the game's unpacked executable, but it's not something that can be learnt overnight.


pumaaa
  • pumaaa

    Player Hater

  • Members
  • Joined: 06 Jun 2017
  • Germany

#1075

Posted 07 June 2017 - 02:48 PM

The decompiled scripts link is down.


Unknown_Modder
  • Unknown_Modder

    Staff at GTA5-Mods.com

  • Members
  • Joined: 07 May 2015
  • Germany

#1076

Posted 07 June 2017 - 04:06 PM

> The decompiled scripts link is down.

https://www.gta5-mod...ed-scripts-b757


jedijosh920
  • jedijosh920

    ⭐⭐⭐⭐⭐

  • Members
  • Joined: 01 Mar 2012
  • United-States

#1077

Posted 07 June 2017 - 05:01 PM

_0x6CD5A433374D4CFB changed to _CAN_PED_SEE_PED

 

Takes two parameters, ped 1 and ped 2, and returns true/false depending on whether ped 1 can see ped 2 in their line of vision.

  • ikt, Fun 2, kagikn and 4 others like this

Kryptus
  • Kryptus

    Developer

  • Members
  • Joined: 04 Jun 2015
  • United-States

#1078

Posted 15 June 2017 - 04:59 AM

 

GTA V native hash translation table from b944 to b1011:

 

http://pastebin.com/yz3bxJSs

 

pls


Ceiridge
  • Ceiridge

    Player Hater

  • Members
  • Joined: 16 Jun 2017
  • Germany

#1079

Posted 16 June 2017 - 11:20 PM

Alexander Blade, please be quicker


ItsiAdam
  • ItsiAdam

    Memer

  • Members
  • Joined: 08 Jun 2016
  • United-Kingdom

#1080

Posted 17 June 2017 - 12:40 AM

> Alexander Blade, please be quicker

you're so inconsiderate!

  • Byzantine and jedijosh920 like this



