Honda Insight Forum
1 - 20 of 55 Posts

· Registered · 8,686 Posts
Discussion Starter · #1 ·
It's been about a year and a half since I installed what I said would be the last Insight NiMH pack I'd work on/with, having gotten tired of it all. But this original pack, in its reconditioned state, has been working awfully well and producing some surprising results. It works in ways I never thought possible with Insight NiMH. That's mainly why I'm here: one particular aspect makes me wonder about the car's stock management and the way the pack - a good pack - should actually perform...

It's all kind of complicated so I'll try to stay as focused as possible on only the 'main thing'. That main thing is this: I can use the pack at what seems like an historically low state of charge, get full assist, and have voltage remain high and stable despite that low state of charge, cool temps, and relatively high assist discharge... This, plus what I've seemingly done to achieve this, plus what appears to be improvement over time/usage, makes me seriously question whether the stock pack management has a major flaw, a flaw that causes deterioration...

In a nutshell, I finagle pack management with the OBDIIC&C and calpod clutch switch, basically forcing usage of the pack in a relatively low charge state. This has allowed me to get a better read on what happens 'down there'. You've got to do this 'finagling' most of the time; otherwise, the car will background charge and such and keep the pack at a higher charge state.

The thing is, if you do this, the pack - my pack, at least - will fairly quickly... 'acclimate' at the lower charge state and it will start looking and performing just as it does at the higher charge state... I'm mainly talking about voltage. If you start at a high-ish charge state and discharge (use assist) for a long stretch, pack voltage will drop and stay pretty low, say around 152V instead of around 161V. The car will background charge. But, if you purposely try to use the pack in that low charge state, doing assist and regen around say 40% state of charge, the voltage will fairly quickly come back up - it will gravitate toward about 161V, not that lower 152V level that it started at...

Say you discharge from 70% to 45%. Pack starts at 164V at rest and ends at 152V at rest. The car tries to background charge but you hit your calpod switch instead, preventing it. You purposely use the pack at around 40% to 50% for a few up-down cycles. Soon, pack voltage will pop back up to around 161V - at that 45% charge state - not the 152V you started at...
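The rested-voltage behavior just described can be sketched as a toy model. This is purely illustrative: the voltages (152 V sagged, 161 V recovered) come from the post, but the exponential recovery shape and the rate constant `k` are assumptions, not measurements.

```python
import math

def rested_voltage(cycles_at_low_soc, v_sagged=152.0, v_recovered=161.0, k=0.5):
    """Rested pack voltage after n gentle assist/regen cycles at ~45% SoC.

    Exponential approach from the sagged level toward the 'acclimated'
    level; k (per-cycle recovery rate) is a made-up constant.
    """
    return v_recovered - (v_recovered - v_sagged) * math.exp(-k * cycles_at_low_soc)

# Right after the long discharge: 152 V at rest. After a handful of
# up/down cycles around 40-50% SoC: approaching ~161 V, as described.
v_start = rested_voltage(0)   # 152.0 V
v_later = rested_voltage(6)   # just under 161 V
```

The point of the sketch is only that rested voltage is a function of recent usage history, not of state of charge alone.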

Now this, I think, is the crux of the potential stock mismanagement. It appears that background charging at least sometimes responds to voltage, most likely tap voltage. The problem, it seems, is that there's too much 'hysteresis' in the voltage behavior... It seems pretty clear that if I force pack usage at progressively lower charge states, letting voltage 'acclimate'/adjust each time, background charging won't kick in as soon and/or as often as it otherwise would. It really looks like the car is using tap voltage at least in part to decide when to background charge: on the first cycle down, say to 45%, voltage will drop low and the car tries to background charge; I stop it with the calpod. I use the pack around the 40% level and voltage pops up. After voltage pops back up, the car doesn't try to background charge as often... But that wouldn't happen if I didn't manually manipulate the car with the calpod, turning off the background charge and using the pack at a low charge state. Rather, the car would see the low-ish voltage on the first cycle down and immediately start to background charge, charging until say 70% nominal. And it will keep doing this over and over, never letting the pack get used in even modestly low charge states...
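The conjectured trigger logic can be written out as a sketch. To be clear, this is NOT Honda's actual BCM code; it's a guess at the behavior the post describes. The trigger threshold of 15.5 V per tap is a hypothetical value chosen so the example matches the voltages quoted above (the pack has 120 cells in 10 taps of 12, so tap voltage is roughly pack voltage divided by 10).

```python
TRIGGER_TAP_V = 15.5   # hypothetical rested-tap trigger; NOT a known Honda spec

def wants_background_charge(pack_voltage, soc_nominal):
    """True if this conjectured logic would start a background charge."""
    tap_v = pack_voltage / 10.0          # 120 cells -> 10 taps of 12 cells
    if soc_nominal >= 0.70:              # already near the high target: no charge
        return False
    return tap_v < TRIGGER_TAP_V         # low tap voltage at low SoC: charge

# First cycle down to ~45%: pack sags to 152 V (15.2 V/tap) -> charge wanted.
# After 'acclimating', the same 45% SoC rests near 161 V -> trigger stays quiet.
```

Under this guess, the same nominal SoC either does or doesn't trigger background charging depending only on how the rested voltage has settled, which is exactly the hysteresis the post complains about.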

The stock management appears to charge too much, too often, keeping charge state too high too much. I think this ends up 'crudding up' the lower charge states, makes them even less usable - higher resistance, voltage depression, whatever you want to call it.

I'm seeing something like 144-145V at about 20-25 amp discharge, at 62F, at around 40% state of charge. I can also invoke full assist and get it - somewhere around 70-80 amps at 125V, under these conditions... I used to think the BCM charged the pack to around 70-75% because the pack could only perform well - put out full assist, for instance - when charged above say 60%. I don't see it that way at all now... Being able to use the pack the way those numbers describe used to seem impossible to me.
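As a sanity check on those figures, the two load points imply an effective pack resistance via dV/dI. The mid-range values below (22.5 A at 144.5 V, 75 A at 125 V) are assumed midpoints of the ranges quoted above, so treat the result as rough.

```python
def effective_resistance(v1, i1, v2, i2):
    """Effective source resistance (ohms) from two voltage/current load points."""
    return (v1 - v2) / (i2 - i1)

r_pack = effective_resistance(144.5, 22.5, 125.0, 75.0)  # ~0.37 ohm for the pack
r_cell_mohm = r_pack / 120 * 1000                        # ~3.1 milliohm per cell
```

A few milliohms per cell at 62F and ~40% SoC would be respectable for this chemistry, which is the point: these numbers are not what a 'crudded-up' low charge state looks like.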

If you've ever seen discharge curves (voltage vs. time or capacity) for Insight NiMH cells, good ones are really quite flat over most of the discharge, while bad ones - mainly ones with some kind of voltage depression/memory effect, crud, whatever - will crater early on, typically well before the mid point. It's this area of the capacity - between the mid point and the end - that, over time, gets used less and less in the Insight. And I think it's a self-reinforcing feedback loop, too: the less this area gets used, the more deteriorated it becomes - and the less it will be used on the next iteration, and so on. But if you force usage at low charge states, I think it might keep the 'curve' high, keep it from cratering early. And then the car allows usage of that area on its own more and more. To a point, at least (I think there are hard-coded behaviors, like fixed charge state percentages, that trigger stuff. For instance, at about 38% nominal, I don't think you get as much assist, regardless of voltage)...
 

· Administrator · 14,392 Posts
You're forcing your well balanced pack into the lower SOC area, which reactivates dormant material.

That's why deep discharging with lightbulbs etc. works as well. It's basically the same thing, but because of the low currents it can be used with poorly balanced packs with a smaller risk of damage.

In the UK CVT rally car, the stock battery always performed very well when given a real hammering and used until it was empty, then charged with tickover regen until full.

Exercising the pack over the full range is definitely beneficial.. Same for humans.. :)
 

· Registered · 8,686 Posts
Discussion Starter · #3 ·
You're forcing your well balanced pack into the lower SOC area, which reactivates dormant material. That's why deep discharging with lightbulbs etc. works as well. It's basically the same thing, but because of the low currents it can be used with poorly balanced packs with a smaller risk of damage.
hmm, I don't think it's the same process/mechanism as low current deep discharge, though it could be similar and/or analogous in ways... The problems each method addresses - low current deep discharge and what I'm describing, just forcing usage of relatively lower charge state areas - seem different. What I'm describing seems to address 'damage' or 'dysfunction' or simply normal operation that's not carefully dealt with by the BCM. Deep discharge addresses longer term, deeper seated deterioration...

I'm suggesting that usage of even normally functional Insight NiMH cells includes 'hysteresis' that the BCM doesn't adequately handle or consider. Either that or some form of deterioration sets in very very quickly - that the BCM can't deal with appropriately... I'd guess it's a combination of the two. (Or, I guess it could be that my pack is old and newer packs don't behave like this - which I don't know since I've never had a new pack - and Honda just figured packs don't need to last forever)...

Over time, the "deeper seated" deterioration sets in - and that's when low current, I would say ultra deep discharge, is needed. The electro-chemistry that happens with that isn't the same as what would happen with simply using the pack at normal charge and discharge rates at merely somewhat lower charge states than the car usually sticks to. I mean, around 40% charge state isn't that low; it's simply lower than what the car normally pushes toward... But, it IS in a range - roughly below 50% - that seems to get 'eaten up' quickly in normal day to day operation. I've discharged sticks/cells after various forms of partial usage (i.e. not full cycles) and it seems par for the course that discharge curves will get depressed (voltage sag over the second half of the curve, mid point voltage happening earlier and earlier in the discharge)...

I'm not actually sure whether using the pack at lower charge states will preserve a normal, flatter, loftier discharge curve. I get the curve normal through ultra deep discharge; I'm thinking using the pack at lower than normal charge states will help keep it normal. I've been keeping it low for about half a year or so at this point, but it wasn't until now that I realized just how far I could take it, or rather, how much farther I could take it than I had been (I'm still not sure how low I can go; 40% seems about the limit, as 38% seems to have those 'hard-coded' nominal triggers)...

Either that or it's simply an interesting test: if one uses the pack around the 40% level day to day, rather than around 70%, without problems, without recals, etc., it's like you know your pack is good - because you can always let it charge up to 75% and - boom - you have another 35% of capacity to burn. You can purposely make the car use that low range, knowing you're on the edge, and it's easier to spot when or if failure starts to happen, too... In other words, I'm fairly sure that normal operation means a discharge curve that sags more and more, day by day. The BCM will force usage at higher and higher charge states - and that makes the problem even worse. If you can continue to use the pack around 40% day to day, you know your pack isn't suffering the degradation that heretofore has been par for the course - because if it were, you wouldn't be operating at 40%, you'd be getting neg recals. Or, you'd simply see voltage getting lower and lower, background charge kicking in incessantly, lower discharge currents, etc. etc...
 

· Registered · 8,686 Posts
Discussion Starter · #5 ·
Do you have any ideas on how to do such deep high rate discharging in a way that doesn't eventually blow a pack by inadvertently and repeatedly reversing a cell that turns out to be weak?
Not sure what "deep high rate discharging" you're referring to. The only deep discharging mentioned has been low rate. And what I'm talking about isn't deep discharging...
 

· Registered · 8,686 Posts
Discussion Starter · #6 ·
I mentioned earlier that I wasn't sure just how low I could take charge state while still being able to bring voltage back up to the levels it reached at high charge states, due to 'hard-coded' stuff. I experimented more with that and it looks like around 36% or so, at least with my 305 BCM, is the limit. I could still get voltage back up to normal high levels (around 158 to 163V or so, down around 29% to 39%), but despite that, background charge wouldn't go away on its own, and, more importantly, assist was throttled.

Basically, at around 36% it's very difficult to invoke assist - it takes much more throttle, so ICE is coming into play way more, plus the assist you get is feeble. I couldn't get full blown full assist - about 50 amps was the highest, at full throttle, despite a high voltage. Partial throttle assist would kick in at around 10-15 amps only, rather than around 20+ amps (it's very easy to invoke assist in 4th gear and hold a constant 20 amp discharge or so, with even very light throttle, while pack voltage hovers around 144V, so I'm very familiar with what usually happens)...

I recall it's been mentioned that different BCMs behave differently at this low end; I recall Eli at Bumblebee mentioning that something like the older BCMs don't throttle assist much if at all at the bottom. I'm tempted to install my really old BCM, I think it's an 010 with a really low serial number (that secondary number/code that the OBDIIC&C reads)...

One other thing I didn't mention that I'm wondering about, that I haven't really verified - is whether 'the top end' will actually fill-up normally after a lot of use at the low end. Earlier I said that if you're operating at the low end you always know you've got all that top end at your disposal - 'just charge back up and - boom - you've got another 35%' of capacity. But truthfully, when I go to charge up I could get 'premature pos recal' - a pos recal to 75% well before the true 75% mark.

In my head I'm envisioning that usage at the low end is preserving the flatness of the discharge curve, the loft in the mid-point-to-lower part of the curve. When you charge ultra deep discharged cells (or deeply self discharged cells), voltage always goes high almost from the start and stays high. If you charge a cell or stick that's been used to some degree, voltage stays pretty low when you charge from empty and only gradually gets high. I've always wondered what that's all about. Seems like something closer to the 'charge after ultra deep discharge' pattern would be better.

That's one thing I've seen during my low end experimentation, too: the more I've done this, the easier voltage seems to pop back up to the chemistry's equilibrium voltage (something like 1.318V per cell, so 158.16V for the pack)... Rather than seeing a somewhat unusual response to normal cells, it does seem like I could be seeing a somewhat unusual response to cells that are undergoing some kind of reconditioning... It just seems odd that it'd take this kind of use, after a year and a half of more or less normal use and some reconditioning stuff, to actually see a difference. I guess the main question will be whether it actually sticks; I've seen similar behavior before - not as clear cut and extreme, but similar nevertheless - but it has always been more or less transient. You drive, you use the pack, it warms up, cells 'get primed', they perform better. The next day you'd have to go through this whole usage scenario again to see the higher-level performance. Here/now I'm thinking the higher-level performance at such low charge states might actually stick around. What's more, I've seen higher performance at higher charge states as well - voltage stays higher longer, and more stable... Instead of seeing, say 145V at 20-25 amps at 70 degrees F, I might see 152V under the same conditions... Or even higher.
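For concreteness, here's the equilibrium figure quoted above scaled out across the pack. The 1.318 V per cell number is the one from the post; the tap-level figure just applies the same value to a 12-cell tap.

```python
CELLS_IN_PACK = 120
CELLS_PER_TAP = 12
V_EQUILIBRIUM_CELL = 1.318   # volts per cell, the figure quoted in the post

v_pack = CELLS_IN_PACK * V_EQUILIBRIUM_CELL   # 158.16 V for the full pack
v_tap = CELLS_PER_TAP * V_EQUILIBRIUM_CELL    # 15.816 V per 12-cell tap
```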

The basic question/scenario here has to do with that voltage hysteresis and whether different usage and reconditioning can produce a more favorable... voltage response. For instance, instead of voltage sagging and staying low after a relatively long assist event, voltage might stay higher longer and pop back up faster when it does drop low under load... I've described the way Insight NiMH works as being like a moving window - the usable capacity window moves around depending on where the NiOOH/Ni(OH)2 interface is positioned in the cell, and its shape changes depending on - something, perhaps the 'cleanness' of the ionic pathways, the amount of electrolyte in the vicinity, et al. I wonder whether different usage can grow this window - make it bigger, make it stick around longer. It kind of seems like something like that must be happening when I'm seeing voltage stay higher longer but at lower charge states, and when I'm seeing higher voltages under load at higher charge states...

I'm pretty much a believer in what I'll call the 'lawn mowing theory of Insight NiMH', kind of what Peter described above. If you leave an area of your lawn un-mowed, it ends up growing in ways way different from the rest that you did mow. You've got to mow the whole thing if you want it to look the same. It's kind of the same way with Insight NiMH: if you want the cell to perform similarly over the whole charge state range, you've got to use the whole cell. Otherwise, areas of it will grow in different ways and end up performing differently... And this gets us back to why the stock battery management might be seriously flawed: one flaw might be that it concentrates usage in a very small charge state area (roughly a 10% window, nominally 65% to 75%, though I think that tends to shift higher with time and degradation, so it ends up being around 75% to 85%. Then again, values like '75%' and '85%' lose almost all meaning the way the stock management works, the way usage under this management slowly shrinks total capacity and concentrates usage higher and higher)...

To this there's just one bit I'll add: after this kind of degradation, the more serious problem might become uneven self discharge, at which point usage will be concentrated lower and lower, not higher. The higher charge states will languish/degrade because they're never used. Total capacity shrinks because you've got at least one cell that drains faster and hits the low voltage cutoff first and early, and then the other cells don't charge up as much because the high charge state degradation means high resistance, higher voltage, and hitting the upper voltage cutoff too soon...
 

· Administrator · 14,392 Posts
hmm, I don't think it's the same process/mechanism as low current deep discharge, though it could be similar and/or analogous in ways... The problems each method addresses - low current deep discharge and what I'm describing, just forcing usage of relatively lower charge state areas - seem different.
I think your method is just reaping the results of reactivating material and reducing depression etc.
You're exercising the pack, using power from low down in the SOC range, and keeping awake material that would otherwise start to become lazy and depressed in the car's normal narrow SOC window..

The normal way of doing it with low current discharge etc reduces the chance of reversal damage, and reaches deeper into the inactive material / depression to provide even more improvement in most cases..

I'm not saying don't do it if it produces results and is easy to do, but it may not be optimal.

As we know individual cell voltage measurement, and charge/discharge capability would be the ideal but it's impracticable.. :(
Then we would really know what is going on.. :)
Keep tinkering though..
 

· Registered · 8,686 Posts
Discussion Starter · #8 · (Edited)
^ Nah, I don't think you're understanding what I'm doing, what I'm describing. I don't blame you. I've already worked with this pack 'reconditioning'. I've seen what each stick does on the bench, I've examined each cell, etc...

[Later...] I think the most important aspect here is that, regardless of what reconditioning I've done, the pack is performing at a higher level only after I've circumvented the car's stock pack management. This is positive behavior/results that I would otherwise not be seeing.

The takeaway for lay folk is that they should probably be trying to use the pack at lower charge states, rather than letting the car keep charging the pack over and over again to high charge states and keeping usage within a narrow, high charge state window.

My sense is that stock battery management should have been engineered to use the pack in the middle, with cycling dipping down and peaking up from the middle, not perpetually focused in the higher 65%+ range... I've done a lot to this pack (and others) and I've driven with it with various usage patterns. But I've never forced such extended low usage before - and I've never seen the degree of improvement I'm seeing. Is it a coincidence - use pack at low charge state > see unprecedented improvement? Possible, but I'm pretty sure it's not...

Granted, this low usage follows on the heels of a tap-level ultra deep discharge, but I've done that before and didn't see the degree of improvement I'm seeing now... Maybe the ultra deep discharge and forced low charge state usage are working together... They do to a certain extent, as the ultra deep discharge is partially what allows the pack to be used so low in the first place. But I imagine whatever conditioning is happening via forced low charge state usage would take place whether one's pack were ultra deep discharged or not. Maybe. Or, the flip side: whatever 'conditioning' (i.e. degradation) happens normally, with the car forcing usage in the high charge state window, should be undone with intentional pack usage at low charge state... I'm really leaning toward this side. It's not so much adding a positive - some 'reconditioning' - but rather undoing a negative - circumventing the concentrated high charge state usage that the BCM imposes...
 

· Registered · 8,686 Posts
Discussion Starter · #9 ·
I was skimming through an old Panasonic NiMH Technical Handbook and came across a rather matter-of-fact statement - something I've seen before yet never fully appreciated - that supports the notion, explained above, that Honda's battery management is most likely flawed, maybe seriously so. If true it would explain why my pack performance continues to shine - as long as I manually thwart OEM management and force low charge state usage...

Pretty sure most BCMs don't allow discharging below about 132 volts, or 1.1V per cell (13.2V at the tap level). Background charge/forced charging usually kicks in well before that or, if assist is in play, assist throttling ends up maintaining about 132V/13.2V. If you go full throttle assist you can drop voltage to 120V/12V, but in general, the level the car tries to maintain is 132V/13.2V. The problem is that discharge voltage characteristics suffer - you end up with voltage sag - if cells aren't discharged low enough. Panasonic says they should be discharged to 1V... This is age-old stuff - 'memory effect' and 'voltage depression' - but here it is, plain as day, in Panasonic's own words:

"Discharge characteristics
The discharge characteristics of Nickel-metal hydride batteries are affected by current, temperature, etc., and discharge voltage characteristics are flat at 1.2V, which is almost the same as Ni-Cd batteries. Discharge voltage and discharge efficiency decrease proportional to current rise or temperature drop. As with Ni-Cd batteries, repeated charge and discharge under high discharge cutoff voltage conditions (more than 1.1V per cell) causes a drop in the discharge voltage, which is sometimes accompanied by a simultaneous drop in capacity. Normal discharge characteristics can be restored by charge and discharge to a discharge end voltage down to 1.0V per cell."

Certainly over time and usage our packs degrade because of this: packs usually only get discharged to the equivalent of 1.1V per cell, so cells never see this 'Panasonic restoration voltage' of 1V. It'd actually be worse if you drove at night all the time, where background charge kicks in at a high charge state, or if you don't use much assist. I also think it can happen very quickly, though perhaps more so, or only, with older packs...
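The per-cell thresholds being discussed scale to tap and pack level like this. The 1.1 V floor and the 1.0 V restorative cutoff are the figures from the post and the Panasonic excerpt; the scaling itself is just 12 cells per tap and 120 per pack.

```python
def pack_and_tap(v_cell, cells_pack=120, cells_tap=12):
    """Scale a per-cell voltage to (pack, tap) level for the 120-cell pack."""
    return v_cell * cells_pack, v_cell * cells_tap

usual_floor = pack_and_tap(1.1)    # (132.0, 13.2): where the BCM holds the line
restore_level = pack_and_tap(1.0)  # (120.0, 12.0): Panasonic's restorative cutoff
```

The gap between those two rows is the whole argument: the car's floor sits right at the level Panasonic says causes voltage depression when it's never crossed.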

Again, the takeaway is that, if you want your pack to last and/or to work better, you should be trying to force low charge state usage. Initially this may require that you do some kind of deep discharge/grid charge, because if your pack is imbalanced or too crudded-up you won't be able to take it low in the first place. But after that, I think something like, at minimum, a weekly low charge state usage drive, is needed to maintain pack performance.

Here's a link to a post that nicely sums up the more technical, electro-chemical explanations for all this: https://www.insightcentral.net/thre...working-ima-battery.81778/page-11#post-912394
 

· Registered · 8,686 Posts
Discussion Starter · #10 ·
I'm still thinking about low charge state usage, somewhat experimenting with it. I've had a cell with slightly faster self discharge in my main pack, and when I let the pack sit for a month I ended up having to deal with that. I basically pulled the offending stick, balanced the cells, and put it back in. But I haven't done anything to the rest of the pack, besides in-car 'manual methods'. I used the pack at high charge states to take advantage of ways the pack can be balanced in the car, and I've mainly been using the pack just above the neg recal point. This 'just above neg recal point' usage is what's on my mind...

I continue to be blown away by just how well a pack can work at extremely low charge state. What's more, I'm becoming increasingly convinced that a much more regimented battery management could wring out way more use from the OEM NiMH packs. Part of this is my growing awareness of just how little capacity one needs for like 95% of the driving we do... A super-tightly-controlled battery management could pretty easily get away with 10% of the stock capacity - so 650mAh. I'd say 20%, or about 1300mAh, is all we'd really need - if the cells could take it.

Most of the usage should be handled as if the battery were more like a capacitor bank. I'm thinking Honda had that kind of usage in mind -- that most of the capacity of the stock pack is there simply as a buffer, to handle high power demands, especially under deteriorated or sub-optimal conditions, and for longevity. As packs deteriorate over time and usage, demands remain modest and so the excess 'buffer' capacity can come into play and handle it (provided the deterioration isn't the debilitating kind, which it wouldn't be if the battery management were appropriate).

You might use a few hundred mAh per assist pop, like 80% of IMA usage; another 15% might be of the slightly longer duration, like up to 10% of capacity, so 650mAh. The remaining 5% of usage might be of the more or less unnecessary, highway incline, longer duration assist -- the kind where you get very little benefit, you can simply downshift and maintain speed and mpg... I think not realizing this stuff is a major misconception: I think we naturally tend to think of the pack as a reserve for extended assist, when in reality it's intended to provide short-term boost and little more...
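The capacity budget above can be laid out as arithmetic. The 6500 mAh nominal figure is implied by "10% of the stock capacity - so 650mAh"; the 80/15/5 usage split is the post's guess at driving patterns, not measured data.

```python
PACK_MAH = 6500   # nominal stock capacity implied by "10% ... so 650mAh"

buffer_10pct = PACK_MAH * 0.10   # 650 mAh:  "could pretty easily get away with"
buffer_20pct = PACK_MAH * 0.20   # 1300 mAh: "all we'd really need"

# Hypothetical usage mix from the post: short assist pops dominate.
usage_mix = {"short_pop": 0.80, "medium_assist": 0.15, "long_highway": 0.05}
assert abs(sum(usage_mix.values()) - 1.0) < 1e-9
```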

The flip side of this is regen: using the pack at low charge states helps to maximize regen. I think stock management neglects regen: with charge state maintained high, regen is often throttled and is less efficient. Plus, it probably places more of a burden on the cells, where oxidizing conditions are more severe and you have a greater chance for unwanted reactions...

One general pattern of a better battery management would be maintaining at most a 20% usable capacity window. It should be low enough to allow max regen. And I'm tending to think most of the time it should be at low charge state, not high-middle or high like it is in OEM form. Low charge state usage allows the cells to drop to low voltages, such as that 1V level mentioned in the Panasonic excerpt I posted earlier. As long as the cells are reasonably balanced, I'm not seeing any problems posed by cells dropping low; on the contrary, almost all my work shows a palliative effect from dropping voltage low...

Here's a graph I posted in another thread (or 2) that shows what low voltage usage does for a cell's discharge voltage curve. The yellow curve was a fast self discharge cell, so it was getting dropped to a lower voltage than other cells. Look how high the voltage curve is compared to the other (black) curve. If ALL the cells had such lofty curves, it would be like adding another 18 volts to the pack. There's a more detailed explanation at the link below the graph.


I'm not positive, but I'm pretty sure my low charge state usage - at this point hovering just above empty for some days - has lifted the 'black cells' closer to the level of the yellow one.
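To unpack the "another 18 volts" figure: across 120 series cells, it implies the yellow curve sits about 0.15 V per cell above the black one through the middle of the discharge. That per-cell gap is inferred here from the pack-level claim, not read off the actual graph.

```python
CELLS = 120

per_cell_loft = 18.0 / CELLS        # 0.15 V/cell implied by the 18 V claim
pack_gain = per_cell_loft * CELLS   # back to 18 V at the pack level
```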

In conclusion (as they say), a better battery management would be able to maintain the lofty yellow curve (but of course across the whole charge state range); it wouldn't let cells get like the black curve. At any given time we want our cells optimized to put out the most power and take in the most power, not put out low power over a long time. I think OEM management isn't quite attuned to making this happen. It doesn't go low enough, it goes too high too often, and in the process cells end up deteriorating.
 

· Super Moderator · 9,833 Posts
I was skimming through an old Panasonic NiMH Technical Handbook and came across a rather matter-of-fact statement -

"Discharge characteristics
The discharge characteristics of Nickel-metal hydride batteries are affected by current, temperature, etc., and discharge voltage characteristics are flat at 1.2V, which is almost the same as Ni-Cd batteries. Discharge voltage and discharge efficiency decrease proportional to current rise or temperature drop. As with Ni-Cd batteries, repeated charge and discharge under high discharge cutoff voltage conditions (more than 1.1V per cell) causes a drop in the discharge voltage, which is sometimes accompanied by a simultaneous drop in capacity. Normal discharge characteristics can be restored by charge and discharge to a discharge end voltage down to 1.0V per cell."
I've read this entire thread a couple of times trying to decipher how your routine differs from deep discharging during normal restorative grid cycling. (Brevity isn't your strong suit. ;)) It looks like the fundamental difference is in the time that the battery spends in a low voltage state. Might one interpret this as an indication that it takes more time than we normally allow with cycling to re-energize the parts of the battery material lying dormant???
 

· Registered · 8,686 Posts
Discussion Starter · #12 ·
...It looks like the fundamental difference is in the time that the battery spends in a low voltage state. Might one interpret this as...
If you're comparing my sort-of-reconditioning routine, such as tap-level ultra deep discharge/grid charge to the typical 3 cycle grid charge/discharge routine, I don't think I'd say the "fundamental" difference is the time at low voltages. For example, you can't just apply any old load, bring voltages to a low level, and wait there for a certain amount of time, such as a longer time than usual, and expect miracles...

I guess the main differences are: 1) by doing taps and using a tiny load, you can better equalize the discharge characteristics each cell sees, such as the amount of discharge and the deepness, and 2) going deeper and slower is more thorough.

Doing full-pack stuff means 120 cells in series, so every cell is subject to the discharge characteristics of every other cell. For example, one cell might be at 50% charge state at start, another might be at 10%: the 10% cell has to endure the discharge current/time that the 50% cell upholds - so the 10% cell gets way more discharge than it should -- or more accurately, it actually gets less, because the higher charged cell forces the lower charged one to blow by the charge state region that needs to be burned; the cell gets driven too low, even reversed, and gets held there. If the unevenness is too severe, the reversal can mess up the negative electrode...

Typically, full pack methods use higher currents, too, so the 10% cell endures a higher current and the above is even worse. Were the 10% cell deeply discharged on its own, the discharge characteristics would be much different, suited to just that cell. When we go down to the tap level, we reduce the risk of discharge unevenness, because we're only dealing with 12 cell strings. Plus, the tiny load takes resistance mismatches out of the equation...
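The series-string problem described above can be shown with a toy calculation: every cell in a string passes the same current, so a cell that starts low runs empty and gets driven negative while a fuller cell still has charge left. The numbers below (0.1C for four hours) are illustrative, not bench data.

```python
def discharge_string(socs_pct, capacity_mah=6500, current_ma=650, minutes=240):
    """Remaining charge (mAh) of each series cell after a fixed discharge.

    All cells see the same current for the same time, so the same mAh is
    removed from every cell. Negative values mean the cell was driven past
    empty (reversal risk).
    """
    drawn = current_ma * minutes / 60.0   # mAh removed from EVERY cell
    return [soc * capacity_mah / 100.0 - drawn for soc in socs_pct]

# A 50% cell next to a 10% cell, 2600 mAh drawn from each:
remaining = discharge_string([50, 10])   # the 10% cell ends up deeply reversed
```

Discharging each cell (or each 12-cell tap with a tiny load) on its own avoids this, because the discharge stops when that cell, not its fullest neighbor, says so.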

The other main part of 'my routine' is using the pack at low charge states. This seems to achieve various things:

1) It preserves the 'loft' of voltage discharge curves, i.e. prevents voltage sag/memory effect/voltage depression,

2) it better preserves balance, both because it reduces the tendency toward relapse into uneven, depressed voltages from cell to cell, and because, if you have any fast self discharge cells, you reduce the amount that can be self discharged. If cells are usually at 70% charge state, you have 70 points of potential imbalance - one cell self discharging from 70% while the others lose nothing. If cells are usually left at say 30%, the potential imbalance is reduced to 30 points, and you always have headroom to charge more.

3) Using a low charge state keeps the pack optimized: the cells are more efficient, and so is the management.
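Point 2 can be put into rough numbers. A minimal sketch, assuming a single fast-self-discharge cell and purely illustrative figures (nothing here is measured): the resting charge state caps how far that cell can drift from its neighbors, because it can only bleed down to empty.

```python
# Worst-case cell-to-cell spread from one fast-self-discharge cell:
# the fast cell can only self-discharge down to empty, so the maximum
# spread (in points of capacity) is capped by the resting charge state.
# Capacity and rate below are illustrative assumptions.

CAPACITY_MAH = 6500
FAST_SD_MAH_PER_DAY = 90     # assumed fast self-discharge rate

def worst_case_spread_pct(resting_soc_pct, days):
    """Worst-case imbalance (points of capacity) after `days` of the
    fast cell self-discharging while the others hold their charge."""
    resting_mah = resting_soc_pct / 100 * CAPACITY_MAH
    drift_mah = min(FAST_SD_MAH_PER_DAY * days, resting_mah)  # can't go below empty
    return 100 * drift_mah / CAPACITY_MAH

# Parked high vs parked low, left long enough for the fast cell to empty:
print(worst_case_spread_pct(70, 365))   # spread capped at 70 points
print(worst_case_spread_pct(30, 365))   # spread capped at 30 points
```

In other words, resting the pack low doesn't stop a fast self-discharge cell, but it halves (or better) the damage it can do to pack balance before you recharge.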
 

· Super Moderator
Joined
·
9,833 Posts
In this thread, you were talking about operating the pack at a lower state of charge than is commanded by the normal battery management routines in the car. So, the last half of your post above applies to my question.
 

· Registered
Joined
·
8,686 Posts
Discussion Starter · #14 · (Edited)
^ I see. But ultimately I think they both apply. If you don't start with balance, you don't end up using a low charge state -- some cells will be low, others higher. That's probably the number one problem: the car doesn't allow much discharge below 132V, or 1.1V per cell. But that's only if ALL the cells are balanced.* In the likely typical pack condition, cells are undoubtedly at least somewhat uneven, where a few cells might get down to 1.1V while all the others are above, say, 1.2V. And then more imbalance sets in, and it's a vicious cycle...

*It's actually even worse -- because even to get down to a mere 1.1V (132V) you have to purposely 'go there'. Most Insight drivers probably rarely if ever get their packs much below 140V, through no fault of their own, as you'd expect the battery management to do the proper thing. But I don't think it does.
 

· Registered
Joined
·
2,484 Posts
These are good musings. I'm having a hard time following this, mostly because of sleep deprivation. Some thoughts:
I think we naturally tend to think of the pack as a reserve for extended assist, when in reality it's intended to provide short-term boost and little more...
My CVT would agree with you. It seems to stay at lower RPMs only when the load is low, and to run at a much higher RPM (with proportionally less assist) than I would run the manual when I'm doing anything other than just tooling along a flat road at 35 MPH.
1) by doing taps and using a tiny load, you can better equalize the discharge characteristics each cell sees, such as the amount of discharge and the deepness, and 2) going deeper and slower is more thorough.
This may be correct, but I think the next statement is more important, discussing the "light bulb" full pack discharge:
the cell gets driven too low, even reversed, and gets held there. If the unevenness is too severe, the reversal can mess up the negative electrode...
Yeah! The first cell to reverse is going to reverse while the pack still has a lot of voltage! So you might drive, what, an amp or more through that cell, reversed? I know for a fact (through an experiment earlier this year) that a good way to kill a weak cell fast is to put a lot of current through it reversed.

I want to know more about when the BCM decides to kill assist - how close did a cell in a stick get to reversal? Does the "canary cell" get repeatedly insulted (damaged) just a little bit from too-low discharge and does this eventually accumulate on that particular cell?
Typically, full pack methods use higher currents, too, so the 10% cell endures a higher current and the above is even worse.
As will the next few cells that reverse after it, using the "light bulb discharge" method. Maybe with the ECM, only the worst cell gets insulted before the BCM unloads the pack. So it makes me nervous to hook up a light bulb to a pack and walk away.

I would really prefer making a jig that conditions a pack at the stick level or even the cell level (high rate until cell voltage drops to a certain point, then reducing current appropriately), but I don't know where I'm going to find the time to build this. An LTO conversion seems to be a better investment in time/money, unless one builds the conditioner out to handle max one stick at a time, and has a spare pack to run while the other is torn apart and being conditioned.
 

· Registered
Joined
·
8,686 Posts
Discussion Starter · #16 ·
...I want to know more about when the BCM decides to kill assist - how close did a cell in a stick get to reversal? Does the "canary cell" get repeatedly insulted (damaged) just a little bit from too-low discharge and does this eventually accumulate on that particular cell?....Maybe with the ECM, only the worst cell gets insulted before the BCM unloads the pack.
You had these same or similar questions in that other thread -- where I answered them in detail: https://www.insightcentral.net/thre...t-efficiency-not-so-good.117785/#post-1450776
In fact, that's when I decided I needed to make that black and yellow curve graph...

As far as when the BCM decides to 'kill assist' goes, I don't know for sure, but from what I can tell it's likely well before a cell reverses. It wouldn't be difficult for the BCM to 'watch' tap voltages and simply throttle assist when the voltage discharge slope gets steep, i.e. when at least one cell approaches empty... And, as I say in that other thread, I don't think draining a cell to near empty, what you're calling 'repeated insult', is an insult. I've flogged my pack at like a true 10% state of charge many times, and in general push it hard at very low charge state, and rather than get worse it has only gotten better...
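For what it's worth, the "watch tap voltages and throttle when the slope steepens" idea sketches out simply. Everything below is hypothetical -- the threshold, names, and logic are invented for illustration, and this is NOT the real BCM behavior, just the shape of the rule being speculated about:

```python
# Hypothetical assist-throttling rule: monitor each 12-cell tap voltage
# and cut assist when any tap's discharge slope steepens, i.e. when some
# cell in that string approaches empty and falls off its voltage knee.
# The threshold and structure here are invented for illustration only.

SLOPE_LIMIT_V_PER_S = -0.5   # assumed "knee" slope for a 12-cell tap

def assist_allowed(tap_history, dt_s=1.0):
    """tap_history: one list of voltage samples per tap, newest last."""
    for samples in tap_history:
        if len(samples) >= 2:
            slope = (samples[-1] - samples[-2]) / dt_s
            if slope < SLOPE_LIMIT_V_PER_S:
                return False     # a string is falling off its voltage knee
    return True

flat = [[14.4, 14.3], [14.4, 14.35]]    # gentle, healthy discharge
knee = [[14.4, 14.3], [14.2, 13.5]]     # second tap dropping fast
print(assist_allowed(flat))   # True
print(assist_allowed(knee))   # False
```

A rule like this would trip well before any single cell actually reverses, which is consistent with the observation above that assist gets cut while the pack still shows usable voltage.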

....An LTO conversion seems to be a better investment in time/money, unless one builds the conditioner out to handle max one stick at a time, and has a spare pack to run while the other is torn apart and being conditioned.
Yeah, that's the funny thing - where this thread actually started (read the first line in the first post). I was using my NiMH pack at low charge states because I needed to know how the BCM would handle lower voltages - for the LTO build I was preparing. But then, over some months, I began to notice that the pack was more efficient, by a lot. And then I started noticing the other things, basically how great the pack was working, beyond what I thought was possible.

So, I've been torn for a while. I don't need an LTO - but I've had the cells for like 3 years or so. And then I've invested so much time with the NiMH that everything I do builds on the last, and the knowledge I gain, and the confidence I gain in the pack, is worth something. I'm not really sure if the Fit LTO builds would be better to do or not; if more people were exploring the 'low charge state' stuff, we might find that the stock packs have a lot more life than we realize. Anyway, like I said, I'm torn between the 'vindication' I'll get (in my head) when I'm still using an original 2002 OEM pack 5 years from now, and the LTO cells moldering in my closet... It's really a toss-up whether I end up building an LTO at this point or not.
 

· Registered
Joined
·
8,686 Posts
Discussion Starter · #17 ·
I was re-reading this thread and thinking about the deeper electro-chemical processes taking place, which could explain why I'm seeing what I've been seeing. One thing I've always struggled with is a sort of 'spatialization' of the electro-chemistry - conceptualizing the spatial arrangement of the chemistry going on inside the cell. But I think I'm understanding something more clearly now...

When a cell charges, nickel hydroxide, Ni(OH)2, is converted to nickel oxyhydroxide, NiOOH. When it discharges, NiOOH is converted back to Ni(OH)2. Various stuff can happen that obstructs these basic, core reactions. For example, one of the main forms of degradation - of 'obstruction' - is the growth of large nickel-hydroxide crystals: reactions take place on the surface of crystals, so if you have larger crystals you have less surface area where the reactions can take place. The result is higher resistance, smaller usable capacity, more voltage sag, etc. Deep/ultra-deep discharging can shrink the crystals and fix these things. There are other, similar problems that discharging helps in the same way.

But, what I've never been able to understand is how different discharge currents and different end voltages upon discharge can all offer improvements, and not necessarily in a hierarchical order. For example, in this thread I'm talking about improvements based primarily on using lower charge state in the car. Well, that's happening at relatively high currents and not-all-that-low voltages. What's more, I've already ultra-deep discharged these cells, at super low current: If I've already ultra-deep discharged these cells, why would I continue to see improvement from these higher current not-so-low discharges in the car?...

Well, here's the thing (and I really need to bang this out before it slips away): When you don't use low charge state, reactions are always taking place on the low-hanging fruit. You have a lot of capacity, a lot of active material, at your disposal, so there's no reason for the reactions to take place anywhere but on that low-hanging fruit, i.e. on the surface of whatever 'crystals' are around. Some are big, some are small, but there's enough material in this mixed form, so why bother with the stuff that's harder to reach? I.e., why break down larger crystals into smaller ones if you don't have to?

If charge state is usually high, there's enough active material to support the loads demanded, simply by virtue of there being more material charged, almost in a brute force sense. When you deliberately take the cells low (or if the management did it from the start), you perpetually force the electro-chemistry to 'reach deeper' into the material - there simply isn't a lot of low hanging fruit to pick. Larger crystals, for instance, are broken up into smaller ones (and then these smaller crystals can support higher currents)...

High charge state use is like a mile wide and an inch deep, growing wider and less deep day by day; low charge state use is like perpetually tilling deeper and deeper, forcing reactions to happen in the hardest to reach 'places'. What you end up with is a cell that's deeply reconditioned, optimized...

I have to leave it there. There's a lot of loose ends and it's all pretty nebulous, a lot of analogizing. But it will have to do for now.
 

· Registered
Joined
·
8,686 Posts
Discussion Starter · #18 · (Edited)
I've been thinking about a couple different things based on my most recent IMA/pack 'testing' and general driving observations, and stuff. One is my 'moving window theory' of charge state and the other has to do with top-end self discharge.

'Moving Window Theory' of Charge State Usage
I'm thinking about the moving window theory in the wake of rock-bottom charge state usage: since then I've charged the pack up, used it (seemingly) high, and I'm not really seeing what I would have expected were a 'moving window theory' of charge state not in play... I only want to briefly mention this as a sort of place-holder, create the simplest of sketches, the outlines of this "theory"...

In a nutshell, I'm thinking cells can function well, perform at a high level, within any given modestly sized charge state range, around any given charge state level, so long as the most recent charge state usage is within that range. For example, you can have high performance within a charge state window equal to, say, 20 percentage points of capacity, centered on a true charge state of say 15%, as long as you continually(?), repeatedly(?) use the charge state between 15% plus or minus 10 points, so 5% to 25%. OR it can be a range of 20 points centered on say 60% true charge state, so 50% to 70%. But, at least with old, used cells, it sort of looks like you can't have equally high performance across the full charge state range at all times, at any given moment...

If you use predominantly low charge state, the high charge state will languish; if you use predominantly high charge state, the low charge state will languish. There do seem to be differences, and perhaps different risks and/or deterioration modes with low vs. high usage, but in general, this is what it looks like to me...

Currently I'm thinking something like this: with old, used cells, you can end up with an imbalanced amount of 'active materials'. In theory a perfect cell would have all 'active materials' in perfect balance, perfect ratios, to sustain full performance at any charge state at any time. But over time and usage, you can end up with a deficiency - say in water in the electrolyte, or maybe nickel-hydroxide - or whatever - so you don't get full performance over the entire charge state range.

BUT, you CAN exercise a limited range and all the active materials needed for full performance end up, I don't know, coalescing, aggregating, orienting in such a way that full performance within that range is perfectly possible. That's partially what I mean when I say "moving window." If you use charge state in the 50% to 70% range, repeatedly, all the stuff you need eventually gravitates to 'that area' and you can get big performance. But if you were to immediately cycle down to the low range, you wouldn't get the same good performance. Conversely, if you use the 5% to 25% range, all the stuff gravitates to 'that area' and you get full performance - but you wouldn't be able to cycle up to the higher range and immediately get the same performance...

This is what it seems like I'm seeing, post rock-bottom charge state usage and subsequent cycling up. I haven't been able to charge as high as I 'should' be able to, or so it seems (it can be hard to tell the true charge state, but I'm pretty certain this is the case). After my low charge state usage, the first pos recal happened seemingly prematurely. After a few days I've been able to add about another 15 points of charge. The couple times I have gone back low I wasn't seeing the phenomenal performance I was seeing earlier, when I was using the low charge state range pretty much exclusively. Hence, the 'moving window theory' of charge state.

Top-end Self-discharge
This was really what I wanted to focus on today, but this is taking too long. So, I'll just throw out a sketch here and try to get back to details later.

When you charge to a high charge state with the Insight cells, or really any NiMH cells, there's faster self discharge in the upper range than in the middle and lower ranges. That's what "top-end self-discharge" refers to. I was skimming a bit of an advanced battery book and was reminded of stuff that makes what's going on pretty clear. From a practical standpoint - what may be useful to Insight drivers - I was thinking that 'pulling away' from the top, discharging a bit off the top when you park, especially if you're going to have the car sitting for more than a couple days, would probably be a good tactic to preserve cell-to-cell charge balance.

Over the last few months or so, I've come to believe that uneven self discharge is likely the most pernicious form of deterioration when it comes to our packs. Other forms of degradation seem surmountable; uneven self discharge doesn't. At the least, it's not fixable: I know of nothing that can fix a fast self discharge cell. In my work putzing with pack reconditioning and such, I'm pretty sure that uneven self discharge was the cause of every ultimate failure.

Now, assuming this is the case, that uneven self discharge, especially when it's uneven and fast, can make our packs unusable, a way to mitigate the impact is important. So that's where the 'pulling back from the top' factors in. If you're using the pack at the top of the car's usable range (which seems par for the course, due to how the BCM operates), you're probably losing a lot of charge to self discharge (SD). And, if SD rates are uneven, then you're inducing a lot of imbalance. I'd guess you could easily end up with about a point or two of imbalance just from uneven self discharge rates in the top charge state range... So, the answer is simply don't leave the car with the pack at a high charge state.

I'm guessing that discharging only a few percentage points off the top, before you park, is all it would take. In general, I usually see a pack resting voltage above about 168V after a pos recal. That voltage will drop to something like 163V after a day or so. I'm pretty sure this voltage drop is, or can be, a direct correlate of the amount and/or rate of self discharge. On the other hand, if you leave the pack at, say, 163V, you'll usually come back to it and the voltage will still be about 163V. So, basically, I think you can preserve cell-to-cell balance, you can thwart uneven self discharge rates, if you discharge enough that the resting voltage when you park the car is somewhere around 163V or lower... I think it could easily be the difference between maybe a one or two point net imbalance per day or not.

Let me take a quick look at an old dataset I have, that includes self discharge rates...

So, at a glance, a cell or two known to have faster self discharge show rates about double what the other cells show - double during the first few days of sitting after a full charge (the rates even out after some days). These rates are of voltage change, not capacity loss. I'm gonna eye-ball it and say the rates are about 3% per day for the fast SD cell vs. 1.5% per day for the others. I also calculated the average capacity loss in mAh per mV change in voltage, based on discharges, and came up with something like 13 mAh per mV. This is all not perfectly comparable, but we just need some rough, back-of-the-envelope stuff... I also have a figure of about 50mAh loss per day for normal cells vs. about 90mAh per day for the fast self discharge cells... I think this is probably a good ballpark figure to use - the differential is about 40mAh, call it 50mAh of extra loss per day due to faster self discharge. 50 mAh / 6500 mAh = about 0.8%, call it 1% loss per day. And it'd be an even bigger share of only the usable capacity.

So, discharging off the top might be the difference between avoiding about a 1% capacity imbalance per day or not - more like 2-3% per day when we consider usable capacity. And this is for a pretty modest unevenness in self discharge rates. Think about it: the risk is losing 1% per day - so after 30 (successive) days of drive, park, drive, park, you could potentially lose up to 30% of capacity in one cell relative to the other cells. If cells started at say 3200 mAh of usable capacity, the fast SD cell after 30 days is now at 3200 minus (0.3 X 6500) = 1250mAh when 'fully car charged'...
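Redoing the back-of-envelope without the intermediate rounding (inputs are the eyeballed figures quoted above, so all of this is ballpark): using the 40 mAh/day differential directly gives a somewhat gentler 30-day drift than the rounded 1%-of-raw-per-day figure, but the same order of magnitude and the same conclusion.

```python
# Back-of-envelope on uneven self-discharge, using the rough figures
# quoted above (eyeballed from an old dataset, not fresh measurements).

RAW_CAPACITY_MAH = 6500
USABLE_MAH = 3200            # assumed usable capacity of an old cell

NORMAL_SD_MAH_DAY = 50       # normal cell, near the top of charge
FAST_SD_MAH_DAY = 90         # fast self-discharge cell

extra_per_day = FAST_SD_MAH_DAY - NORMAL_SD_MAH_DAY   # differential loss, mAh/day

pct_raw = 100 * extra_per_day / RAW_CAPACITY_MAH      # share of raw capacity per day
pct_usable = 100 * extra_per_day / USABLE_MAH         # share of usable capacity per day

days = 30
drift = extra_per_day * days                          # relative drift after a month
print(f"{extra_per_day} mAh/day extra = {pct_raw:.1f}%/day of raw, "
      f"{pct_usable:.2f}%/day of usable")
print(f"after {days} days: {drift} mAh of drift, fast cell near "
      f"{USABLE_MAH - drift} mAh usable when 'fully car charged'")
```

Either way you slice it, a single modestly-fast self-discharge cell parked high loses a meaningful fraction of its usable capacity relative to its neighbors within weeks.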

Anyway, I think this is in the ballpark. If you have a 'weak' pack, try discharging off the top a bit before you park. Even if you don't have a weak pack, I think it's probably a good tactic to help maintain it. I'll try to get back to this some other time and clean it up, fill in some details, etc.

One thing I need to add before I forget:
When I talk about top-end self discharge, one question that pops up is: But what exactly is the "top-end"? Above, when I talk about top-end self discharge, I talk about the 'top' as if it were an absolute range of charge state, say above 75%. But when I talk about the moving window theory, the top can actually be anywhere, really. This is confusing.

I'm pretty sure that the "top-end" in "top-end self discharge" can be the top of the moving window, while the moving window can be anywhere within the absolute 0-6500mAh, 0-100% charge state range afforded by the raw capacity of the cells (or however much is left in old, used cells). You can see resting voltages greater than, say, 163V even at a low absolute charge state. If you're using the 5% to 25% absolute range, the 'top-end' will be around 20-25%; that is, you'll see resting voltages well above 163V at 25% if you charge from 5% to 25%. And I think the top-end self discharge idea applies here as well, even though you're at an absolute low charge state.

There are probably differences - it's probably not as bad as it would be were the 'moving window' at an absolute high charge state. But I think the mechanisms for faster self discharge are still there, at a low absolute state but at the top of the moving charge state window.

So basically, so long as pack voltage is above about 163V, you'd need to discharge a bit 'off the top' if you want to slow down and even-out cell-to-cell self discharge.
 

· Registered
Joined
·
8,686 Posts
Discussion Starter · #19 ·
I've been re-reading a chapter from the old 'Huggins' advanced battery book, trying to piece together info about NiMH electrochemistry that supports the top-end self-discharge idea above, or at least provides some of the underpinnings. Yesterday I said I skimmed some of that and it made "what's going on pretty clear." I don't have the patience to quote passages and go line by line, but I figured I should try to get something down...

The gist of it is this: when the positive electrode is fully charged, NiOOH is in contact with the electrolyte. Normally, NiOOH isn't in contact with the electrolyte. I don't quite understand how that works, but it's what I've read in a number of places. NiOOH is a "mixed conductor" - it can conduct both protons and electrons. Ni(OH)2, on the other hand, is not electronically conducting. So, usually or normally, Ni(OH)2 is a barrier between NiOOH and the electrolyte - that is, until the electrode is fully charged and NiOOH comes into contact with the electrolyte.

You can probably instantly see what's going on with top-end self discharge: NiOOH comes into contact with the electrolyte, reactions take place between water and the NiOOH, resulting in hydrogen being added to the electrode, converting NiOOH to Ni(OH)2. This is self discharge. With the addition of protons to the electrode, the potential drops and thus cell voltage drops... I imagine that as more and more NiOOH is converted to Ni(OH)2, Ni(OH)2 increasingly functions as a barrier layer and self discharge slows down.
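The thread doesn't write the reaction out, but the oxygen-evolution self-discharge path described above is commonly written as follows (my addition from general NiMH literature, not a quote from the Huggins book):

```latex
% Self-discharge at the charged positive electrode: NiOOH in contact
% with the aqueous electrolyte is reduced back to Ni(OH)2, evolving O2.
2\,\mathrm{NiOOH} + \mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{Ni(OH)_2} + \tfrac{1}{2}\,\mathrm{O_2}
```

This matches the mechanism above: water reacts with exposed NiOOH, protons are added to the electrode (NiOOH becomes Ni(OH)2), and the electrode potential drops.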

In conjunction with my previous post, with the 'moving window theory of charge state', I'm thinking it's probably not a cut-and-dried thing where this process can only happen at an absolute full charge; rather, I imagine it can happen whenever voltage is appreciably high, say, above the equilibrium voltage. I.e., I'm thinking this process can happen at the top of the moving charge state window.
 

· Registered
Joined
·
8,686 Posts
Discussion Starter · #20 ·
One caveat to the idea that discharging a bit 'off the top' might help mitigate uneven self-discharge: it depends on the differential impact of the high charge state circumstance on self discharge rates. I know, that's a mouthful...

I'm looking at data that show cell-level self discharge rates. Rates are fast at high charge state and gradually slow down. The few cells with faster self discharge seem to have faster self discharge only at the high charge state. After some days, the self discharge rates for all the cells tend to even out. In other words, these data show faster self discharge for a few cells vs. the rest only when the charge state is high. Thus, when you discharge off the top, you bring all cells down to a charge state where self discharge rates are more even.

But, I'm thinking that in general it's possible or likely that top-end self discharge, the mechanisms for which I explained in the previous post, isn't the same as the self discharge that can make one cell's SD faster than another. 'Faster self discharge' might be a function of different mechanisms. For instance, all cells will have some base level of SD due to NiOOH contacting water and all that; on top of that, though, some cells might have a different mechanism of self discharge, say a tiny micro short or something... If you discharge a bit off the top to thwart uneven self discharge, maybe you only end up thwarting fast self discharge at the top, but not the unevenness. It only works if the SD rates are different, uneven at the top but more even lower down. That's what I see in this dataset, but I couldn't say whether that'd be the case typically.
 